Triangular number

[Figure: the first six triangular numbers.]

A triangular number or triangle number counts the objects that can form an equilateral triangle, as in the diagram on the right. The nth triangle number is the number of dots composing a triangle with n dots on a side, and is equal to the sum of the n natural numbers from 1 to n. The triangle numbers are given by the following explicit formulas:

$T_n = \sum_{k=1}^{n} k = \frac{n(n+1)}{2} = \binom{n+1}{2},$

where $\binom{n+1}{2}$ is a binomial coefficient. The triangular number $T_n$ solves the "handshake problem" of counting the number of handshakes if each person in a room with n + 1 people shakes hands once with each other person. Triangle numbers are the additive analog of the factorials, which are the products of the integers from 1 to n.

The number of line segments between closest pairs of dots in the triangle can be represented in terms of the number of dots or with a recurrence relation:

$L_n = 3T_{n-1} = \frac{3n(n-1)}{2}; \qquad L_n = L_{n-1} + 3(n-1), \quad L_1 = 0.$

In the limit, the ratio between the two numbers, dots and line segments, is

$\lim_{n\to\infty} \frac{T_n}{L_n} = \frac{1}{3}.$

Relations to other figurate numbers
Most simply, the sum of two consecutive triangular numbers is a square number, since $T_{n-1} + T_n = n^2$, where $T$ is a triangular number.
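A few lines of Python (my own check, not part of the article) confirm that the closed form agrees with the direct sum, the binomial coefficient, and the handshake count:

    from math import comb

    def triangular(n):
        # nth triangular number via the closed form n(n+1)/2
        return n * (n + 1) // 2

    for n in range(1, 7):
        assert triangular(n) == sum(range(1, n + 1)) == comb(n + 1, 2)

    # Handshake problem: a room of n + 1 = 5 people needs T_4 = 10 handshakes.
    print(triangular(4))  # 10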
An O(ND) Difference Algorithm and Its Variations

BibTeX:

    @ARTICLE{Myers86,
      author  = {Eugene W. Myers},
      title   = {An O(ND) Difference Algorithm and Its Variations},
      journal = {Algorithmica},
      year    = {1986},
      volume  = {1},
      pages   = {251--266}
    }

Abstract: The problems of finding a longest common subsequence of two sequences A and B and a shortest edit script for transforming A into B have long been known to be dual problems.

Reed's law

Reed's law is the assertion of David P. Reed that the utility of large networks, particularly social networks, can scale exponentially with the size of the network. The reason for this is that the number of possible sub-groups of network participants is 2^N − N − 1, where N is the number of participants. This grows much more rapidly than either the number of participants, N, or the number of possible pair connections, N(N − 1)/2 (which follows Metcalfe's law), so that even if the utility of groups available to be joined is very small on a per-group basis, eventually the network effect of potential group membership can dominate the overall economics of the system.

Quote
From David P. Reed: "[E]ven Metcalfe's law understates the value created by a group-forming network [GFN] as it grows."

Criticism
Other analysts of network value functions, including Andrew Odlyzko and Eric S. Raymond, have argued that Reed's law overstates the value of large networks.
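To make the growth rates concrete, here is a small Python table (my own illustration, not from either source) comparing the three scalings mentioned above:

    # Compare three network-value scalings: linear (N),
    # Metcalfe (N(N-1)/2), and Reed (2^N - N - 1).
    for n in (2, 5, 10, 20, 30):
        pairs = n * (n - 1) // 2       # Metcalfe's law
        subgroups = 2**n - n - 1       # Reed's law: non-trivial sub-groups
        print(f"N={n:>2}  pairs={pairs:>3}  subgroups={subgroups}")

By N = 30 there are only 435 pair connections but over a billion possible sub-groups, which is the heart of Reed's argument.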
Peterson's algorithm

Peterson's algorithm (also known as Peterson's solution) is a concurrent programming algorithm for mutual exclusion that allows two processes to share a single-use resource without conflict, using only shared memory for communication. It was formulated by Gary L. Peterson in 1981.[1] While Peterson's original formulation worked with only two processes, the algorithm can be generalized to more than two,[2] as shown below.

The algorithm
The algorithm uses two variables, flag and turn. The algorithm satisfies the three essential criteria for solving the critical section problem, provided that changes to the turn, flag[0], and flag[1] variables propagate immediately and atomically.

Mutual exclusion
P0 and P1 can never be in the critical section at the same time: if P0 is in its critical section, then flag[0] is true.

Progress

Bounded waiting

Filter algorithm: Peterson's algorithm for N processes
The filter algorithm generalizes Peterson's algorithm to N processes.
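The two-process version is compact enough to sketch in full. The Python below is my own illustration (names such as process and counter are mine, not from the article); a production implementation would need memory fences or atomic variables, and this demo only behaves correctly because CPython executes each of these shared-variable operations atomically under its interpreter lock:

    import threading

    flag = [False, False]  # flag[i]: process i wants to enter its critical section
    turn = 0               # which process yields when both are interested
    counter = 0            # shared resource protected by the algorithm

    def process(i):
        global turn, counter
        other = 1 - i
        for _ in range(50_000):
            # Entry protocol
            flag[i] = True
            turn = other                       # give priority to the other process
            while flag[other] and turn == other:
                pass                           # busy-wait
            # Critical section
            counter += 1
            # Exit protocol
            flag[i] = False

    threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 100000 if mutual exclusion held throughout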
Metcalfe's law

[Figure: two telephones can make only one connection, five can make 10 connections, and twelve can make 66 connections.]

Metcalfe's law states that the value of a telecommunications network is proportional to the square of the number of connected users of the system (n²). First formulated in this form by George Gilder in 1993,[1] and attributed to Robert Metcalfe in regard to Ethernet, Metcalfe's law was originally presented, circa 1980, not in terms of users but of "compatible communicating devices" (for example, fax machines or telephones).[2] Only later, with the launch of the Internet, was the law carried over to users and networks; its original intent was to describe Ethernet purchases and connections.[3] The law is also closely related to economics and business management, especially for competing companies looking to merge with one another.
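The caption's numbers are simply n(n − 1)/2, the number of distinct pairs among n devices; a quick check in Python (my own, not from the article):

    # Distinct connections among n devices: n choose 2 = n(n-1)/2.
    for n in (2, 5, 12):
        print(n, n * (n - 1) // 2)  # prints 2 1, 5 10, 12 66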
John Graham-Cumming: Monte Carlo simulation of One Banana, Two Banana to develop a card counting strategy

The children's game One Banana, Two Banana is a high-stakes game of probability theory in action. Or something like that. Actually, it's a fun game where you have to take probability into account when deciding what to do. Conceptually, the game is simple: each turn, a player chooses how many cards to draw from the pack, and the total distance they move on the board (winning is a simple first-past-the-post system) is the sum of the number of bananas on the banana cards drawn. There are six banana skin cards to start with, and as they are removed from the pack they are placed on a special card for all to see.

So I wrote a little program that simulates One Banana, Two Banana games (or at least board positions) to see what the expected score is depending on the number of cards that the player chooses to pick. First, here's the code:

    # The simulation runs through all the possible card positions and
    # plays a large number of random draws for each possible number of
    # cards a player might draw.

    use strict;
    use warnings;

    # This is a two dimensional array.
    my @skins;
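The post continues in Perl; as a language-neutral sketch of the same Monte Carlo idea, here is a small Python version of my own. The deck composition and the rule that drawing any banana skin forfeits the turn are assumptions for illustration only, not the game's actual rules:

    import random

    def expected_score(deck, draws, trials=50_000):
        # Estimate the expected score of drawing `draws` cards from `deck`.
        # Assumed rule: score the sum of banana cards drawn, but zero if
        # any banana-skin card ('skin') comes up.
        total = 0
        for _ in range(trials):
            hand = random.sample(deck, draws)
            if 'skin' not in hand:
                total += sum(hand)
        return total / trials

    # Hypothetical deck: banana cards worth 1-3 plus the six skins.
    deck = [1] * 10 + [2] * 8 + [3] * 4 + ['skin'] * 6
    for draws in range(1, 8):
        print(draws, round(expected_score(deck, draws), 2))

Plotting expected score against the number of draws exposes the risk/reward trade-off the post explores: more cards mean more bananas, but a higher chance of hitting a skin.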
Thinking Networks II
James Lawley

First presented to The Developing Group on 3 June 2006 (an earlier version, Thinking Networks I, was presented on 5 June 2004).

Note: there are three types of description in this paper.

I'll start with a tribute to Fritjof Capra, who made an early and significant contribution to bringing the importance of networks to my attention when he asked: "Is there a common pattern of organization that can be identified in all living systems?" The kinds of networks we shall be considering are complex adaptive or complex dynamic networks. However, none of this would be very interesting if it wasn't for the fact that, in spite of their complexity, and in spite of their adaptive and dynamic nature, recent research has shown these networks to have remarkably consistent patterns of organisation. "We're accustomed to thinking in terms of centralized control, clear chains of command, the straightforward logic of cause and effect."
Damn Cool Algorithms: Levenshtein Automata
Posted by Nick Johnson | Filed under python, coding, tech, damn-cool-algorithms

In a previous Damn Cool Algorithms post, I talked about BK-trees, a clever indexing structure that makes it possible to search for fuzzy matches on a text string based on Levenshtein distance - or any other metric that obeys the triangle inequality. Today, I'm going to describe an alternative approach, which makes it possible to do fuzzy text search in a regular index: Levenshtein automata.

Introduction
The basic insight behind Levenshtein automata is that it's possible to construct a finite state automaton that recognizes exactly the set of strings within a given Levenshtein distance of a target word. Of course, if that were the only benefit of Levenshtein automata, this would be a short article.

Construction and evaluation
The diagram on the right shows the NFA for a Levenshtein automaton for the word 'food', with maximum edit distance 2. Because this is an NFA, there can be multiple active states.

Indexing
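The post builds the automaton explicitly; as a compact illustration of what it computes, here is a sketch of my own (not the article's code) that simulates the NFA directly, tracking the set of active states as pairs of (characters of the target consumed, errors used):

    def levenshtein_match(term, k, word):
        # True if `word` is within Levenshtein distance k of `term`,
        # by simulating the Levenshtein NFA as a set of (offset, errors) states.
        def expand(states):
            # Epsilon closure: deleting a character of `term` consumes no input.
            frontier = set(states)
            while frontier:
                step = {(i + 1, e + 1) for (i, e) in frontier
                        if i < len(term) and e < k} - states
                states |= step
                frontier = step
            return states

        states = expand({(0, 0)})
        for c in word:
            next_states = set()
            for (i, e) in states:
                if i < len(term) and term[i] == c:
                    next_states.add((i + 1, e))          # match
                if e < k:
                    next_states.add((i, e + 1))          # insertion
                    if i < len(term):
                        next_states.add((i + 1, e + 1))  # substitution
            states = expand(next_states)
            if not states:
                return False                             # automaton is dead
        return any(i == len(term) for (i, e) in states)

    print(levenshtein_match('food', 2, 'fod'))    # True
    print(levenshtein_match('food', 2, 'flood'))  # True
    print(levenshtein_match('food', 2, 'fxxxd'))  # False

The set of pairs here plays exactly the role of the multiple simultaneously active NFA states mentioned above; converting this to a DFA is what makes the approach fast enough for indexing.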
Visualization and evolution of the scientific structure of fuzzy sets research in Spain
A.G. López-Herrera, M.J. Cobo, E. Herrera-Viedma, F. Herrera

Introduction
Fuzzy set theory, which was founded by Zadeh (1965), has emerged as a powerful way of quantitatively representing and manipulating the imprecision in problems. In Spain, the first paper on fuzzy set theory was published by Trillas and Riera (1978). According to the ISI Web of Science, more than one hundred thousand papers on fuzzy set theory foundations and applications have been published in journals since 1965. In this paper, the first bibliometric study analysing the research carried out by the Spanish fuzzy set community is presented. The study reveals the main themes treated by the Spanish fuzzy set theory community from 1978 to 2008. Countries were chosen for comparison according to three criteria: they must be in the top ten of the most productive countries (according to data in the ISI Web of Science); just two countries for each geographical area (America, Europe and Asia) are considered; and their first paper on the topic had to be published in 1980 or earlier.

Methods
John Graham-Cumming: It's time to build the Analytical Engine

Normal people have a small part of their brain that acts as a sort of automatic limiter. They get some crazy idea like writing a book, or campaigning for a government apology, or calculating the number of legal track layouts for a cheap train set, and their limiter goes: "Don't be ridiculous", and they go back to normal life. Unfortunately, I was born with that piece missing.

So, it's not without trepidation that I say that it's time Britain built the Analytical Engine. After the wonderful reconstruction of the Difference Engine, we need to finish Babbage's dream of a steam-powered, general-purpose computer. The Analytical Engine has all the hallmarks of a modern computer: it has a program (on punched cards), a CPU (called the 'mill') for doing calculations, and it has memory.

[Photo from Flickr user csixty4]

What a marvel it would be to stand before this giant metal machine, powered by a steam engine, and running programs fed to it on a reel of punched cards.

[Photo from Flickr user gastev]

Am I mad?
Recursive Algorithms
Copyright © 1996-97 Kenneth J. Goldman

When we defined the Rational class, we didn't reduce the fractions, so we ended up with numbers like 10/8 and 4/6. The mathematical definition is: the greatest common divisor (GCD) of two integers m and n is the greatest integer that divides both m and n with no remainder. So, we'd like a procedure that computes the GCD. Thus, the problem is: given integers m and n such that m >= n > 0, find the GCD of m and n. To solve this problem, the mathematical definition isn't enough. We need an algorithm: a precise description of a computational process. So far, the procedures we have written contained only simple formulae and occasional conditional statements. Before considering possible GCD algorithms, let's design algorithms for some simpler problems.

Example: factorial
Recall that n! = n × (n−1) × ... × 1, and that 0! = 1. We want to build a procedure that computes n!. Let's try to devise an algorithm straight from the mathematical definition.

    int factorial(int n)
    {
        if (n == 0)
            return 1;
        else
            return (n * factorial(n-1));
    }

Will this work? This looks good.
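The extract stops before the GCD algorithm itself; for completeness, here is the classic recursive solution as a sketch of my own in Python, following Euclid's algorithm rather than anything shown in the notes:

    def gcd(m, n):
        # Euclid's algorithm: gcd(m, n) = gcd(n, m % n), with gcd(m, 0) = m.
        if n == 0:
            return m
        return gcd(n, m % n)

    # Reducing the fractions from the Rational example above:
    print(gcd(10, 8))  # 2, so 10/8 reduces to 5/4
    print(gcd(4, 6))   # 2, so 4/6 reduces to 2/3

Like the factorial procedure, it mirrors a mathematical definition directly: the base case stops the recursion, and each recursive call works on strictly smaller arguments.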
Simulated annealing

This notion of slow cooling is implemented in the simulated annealing algorithm as a slow decrease in the probability of accepting worse solutions as the algorithm explores the solution space. Accepting worse solutions is a fundamental property of metaheuristics because it allows for a more extensive search for the optimal solution. The method was independently described by Scott Kirkpatrick, C. Daniel Gelatt and Mario P. Vecchi in 1983,[1] and by Vlado Černý in 1985.[2] The method is an adaptation of the Metropolis-Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, invented by M.N. Rosenbluth and published by N. Metropolis et al. in 1953.

Overview
[Figure: simulated annealing solving a hill-climbing problem.]

Each state of the physical system corresponds to a point s in the search space, and the function E(s) to be minimized is analogous to the internal energy of the system in that state.

The basic iteration
The neighbours of a state
Acceptance probabilities
The probability of making the transition from the current state s to a candidate new state s' is specified by an acceptance probability function P(e, e', T), which depends on the energies e = E(s) and e' = E(s') of the two states and on a global time-varying parameter T called the temperature.
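The acceptance rule is easiest to see in code. Here is a minimal, self-contained sketch of my own using the standard Metropolis criterion exp(−(e′ − e)/T); the neighbour function and geometric cooling schedule are illustrative choices, not prescribed by the article:

    import math
    import random

    def simulated_annealing(energy, neighbour, s0, t0=1.0, cooling=0.995, steps=10_000):
        # Minimize `energy` starting from state s0. Downhill moves are always
        # accepted; uphill moves are accepted with probability
        # exp(-(e_new - e) / T), so worse solutions become rarer as T falls.
        s, e, t = s0, energy(s0), t0
        best_s, best_e = s, e
        for _ in range(steps):
            s_new = neighbour(s)
            e_new = energy(s_new)
            if e_new < e or random.random() < math.exp(-(e_new - e) / t):
                s, e = s_new, e_new
                if e < best_e:
                    best_s, best_e = s, e
            t *= cooling  # geometric cooling schedule (an illustrative choice)
        return best_s, best_e

    # Toy example: minimize a bumpy one-dimensional function.
    f = lambda x: x * x + 10 * math.sin(3 * x)
    print(simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5), s0=5.0))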