Selection bias
Selection bias is a statistical bias in which there is an error in choosing the individuals or groups to take part in a scientific study.[1] It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of a statistical analysis resulting from the method of collecting samples. If the selection bias is not taken into account, then some conclusions of the study may not be accurate.

Types

There are many types of possible selection bias, including:

Sampling bias

A distinction of sampling bias (albeit not a universally accepted one) is that it undermines the external validity of a test (the ability of its results to be generalized to the rest of the population), while selection bias mainly addresses internal validity for differences or similarities found in the sample at hand.
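The distortion described above can be made concrete with a short simulation. The scenario, numbers, and names below are illustrative assumptions, not taken from the article: a survey whose collection method can only ever reach one subgroup of the population will produce a skewed estimate no matter how large the sample is.

```python
import random

random.seed(0)

# Hypothetical population: half the people are "reachable" by a daytime
# phone survey; the unreachable half work longer hours and, in this
# invented setup, earn more on average.
population = []
for _ in range(100_000):
    reachable = random.random() < 0.5
    income = random.gauss(40_000 if reachable else 60_000, 10_000)
    population.append((reachable, income))

true_mean = sum(inc for _, inc in population) / len(population)

# The survey's collection method only ever samples the reachable half.
sample = [inc for reachable, inc in population if reachable][:1000]
biased_mean = sum(sample) / len(sample)

print(f"true mean income:   {true_mean:,.0f}")
print(f"survey estimate:    {biased_mean:,.0f}")
```

The survey underestimates the population mean because the sampling method, not chance, determines who enters the sample; taking a larger sample from the same frame would not remove the error.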
Synchronicity
Synchronicity is the occurrence of two or more events that appear to be meaningfully related but not causally related. The concept holds that such events are "meaningful coincidences". Synchronicity was first defined by Carl Jung, a Swiss psychiatrist, in the 1920s.[1] During his career, Jung furnished several slightly different definitions of it,[2] variously describing synchronicity as an "acausal connecting (togetherness) principle", a "meaningful coincidence", and an "acausal parallelism". He introduced the concept as early as the 1920s but gave a full statement of it only in 1951, in an Eranos lecture.[3] In 1952, he published the paper "Synchronizität als ein Prinzip akausaler Zusammenhänge" ("Synchronicity – An Acausal Connecting Principle")[4] in a volume that also contained a related study by the physicist and Nobel laureate Wolfgang Pauli.[5][6]
Argumentum ad populum
In argumentation theory, an argumentum ad populum (Latin for "argument to the people") is a fallacious argument which concludes that a proposition must be true because many or most people believe it, often concisely encapsulated as: "If many believe so, it is so." This type of argument is known by several names,[1] including appeal to the masses, appeal to belief, appeal to the majority, appeal to democracy, appeal to popularity, argument by consensus, consensus fallacy, authority of the many, bandwagon fallacy, fickle crowd syndrome, and vox populi,[2] and in Latin as argumentum ad numerum ("appeal to the number") and consensus gentium ("agreement of the people"). It is also the basis of a number of social phenomena, including communal reinforcement and the bandwagon effect. The Chinese proverb "three men make a tiger" concerns the same idea.

Evidence

One could claim that smoking is a healthy pastime, since millions of people do it.
Illusory correlation
History

"Illusory correlation" was originally coined by Chapman and Chapman (1967) to describe people's tendency to overestimate relationships between two groups when distinctive and unusual information is presented.[5][6] The concept was used to question claims about objective knowledge in clinical psychology through the Chapmans' refutation of many clinicians' widely used Wheeler signs for homosexuality in Rorschach tests.[7]

Example

David Hamilton and Robert Gifford (1976) conducted a series of experiments demonstrating how stereotypic beliefs regarding minorities could derive from illusory correlation processes.[8] To test their hypothesis, Hamilton and Gifford had research participants read a series of sentences describing either desirable or undesirable behaviors, which were attributed to either Group A or Group B.[5] Abstract groups were used so that no previously established stereotypes would influence the results.
Juggling by numbers: How notation revealed new tricks
19 December 2012, last updated at 20:12 ET
By Laura Gray, BBC News

The mathematical formula of juggling

Juggling is usually associated with brightly coloured balls and clowning around, but it has more connections than you might think with the world of numbers. Colin Wright is a mathematician who in the 1980s helped develop a notation system for juggling while at Cambridge University. He was frustrated that there was no way to write down juggling moves. "There was a juggling move called Mills Mess, and when I tried to write it down, I couldn't."

The system he helped devise became known as Siteswap. Sequences of numbers are used to denote particular juggling moves. Each number encodes the number of beats of a throw, which is related to its height and the hand to which the throw is made. The higher the ball is thrown, the bigger the number, so throwing a four means you are throwing the ball higher than a two. The numbers are then written into sequences.
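The article does not spell out the arithmetic, but Siteswap's standard validity test can be sketched briefly: a sequence of throw heights is jugglable exactly when no two throws land on the same beat, and for a valid pattern the number of balls is the average throw height. The function names below are illustrative, not from the article.

```python
def is_valid_siteswap(seq):
    """Check whether a sequence of throw heights is a valid siteswap.

    A pattern of length n is jugglable exactly when the landing beats
    (i + seq[i]) mod n are all distinct, i.e. no two balls land on the
    same beat.
    """
    n = len(seq)
    landings = {(i + throw) % n for i, throw in enumerate(seq)}
    return len(landings) == n


def ball_count(seq):
    """For a valid siteswap, the number of balls is the average throw height."""
    return sum(seq) // len(seq)


# The three-ball cascade is written "3"; 441 and 531 are common 3-ball tricks.
print(is_valid_siteswap([3]))        # True
print(is_valid_siteswap([4, 4, 1]))  # True
print(is_valid_siteswap([5, 3, 1]))  # True
print(is_valid_siteswap([4, 3, 2]))  # False: all three throws land on the same beat
print(ball_count([4, 4, 1]))         # 3
```

The collision check is why 441 works but 432 does not, even though both average out to three balls.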
Anchoring
Anchoring or focalism is a cognitive bias that describes the common human tendency to rely too heavily on the first piece of information offered (the "anchor") when making decisions. During decision-making, anchoring occurs when individuals use an initial piece of information to make subsequent judgments. Once an anchor is set, other judgments are made by adjusting away from that anchor, and there is a bias toward interpreting other information around the anchor. For example, the initial price offered for a used car sets the standard for the rest of the negotiation, so that prices lower than the initial price seem more reasonable even if they are still higher than what the car is really worth.

Focusing effect

The focusing effect (or focusing illusion) is a cognitive bias that occurs when people place too much importance on one aspect of an event, causing an error in accurately predicting the utility of a future outcome.[1]
Survivorship bias
Survivorship bias is the logical error of concentrating on the people or things that "survived" some process and inadvertently overlooking those that did not because of their lack of visibility. This can lead to false conclusions in several different ways. The survivors may literally be people, as in a medical study, or could be companies, research subjects, or applicants for a job, or anything that must make it past some selection process to be considered further. Survivorship bias can lead to overly optimistic beliefs because failures are ignored, such as when companies that no longer exist are excluded from analyses of financial performance. Survivorship bias is a type of selection bias.

In finance

In finance, survivorship bias is the tendency for failed companies to be excluded from performance studies because they no longer exist. For example, a mutual fund company's selection of funds today will include only those that are successful now.
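The fund example above can be checked with a simulation. Everything here is an illustrative assumption, not drawn from any real fund data: every simulated fund has a true expected annual return of zero, but funds that lose more than half their value are liquidated and drop out of the record, so an analysis restricted to surviving funds reports a rosier average.

```python
import random

def simulate_fund_returns(n_funds=10_000, years=10, seed=1):
    """Simulate funds whose true mean annual return is 0%.

    A fund that falls below half its starting value is liquidated and
    vanishes from the "surviving funds" record, mimicking how defunct
    funds disappear from performance studies.
    """
    random.seed(seed)
    all_means, surviving_means = [], []
    for _ in range(n_funds):
        value, returns, alive = 1.0, [], True
        for _ in range(years):
            r = random.gauss(0.0, 0.15)   # zero true drift, 15% volatility
            returns.append(r)
            value *= 1 + r
            if value < 0.5:               # fund is closed and drops out
                alive = False
                break
        mean_return = sum(returns) / len(returns)
        all_means.append(mean_return)     # the full universe, dead funds included
        if alive:
            surviving_means.append(mean_return)
    avg = lambda xs: sum(xs) / len(xs)
    return avg(all_means), avg(surviving_means)

full_universe, survivors_only = simulate_fund_returns()
# Restricting the average to survivors inflates it, even though every
# fund had exactly the same zero expected return.
print(f"all funds: {full_universe:+.4f}   survivors only: {survivors_only:+.4f}")
```

The gap between the two averages is the survivorship bias: nothing about the surviving funds was better, they were simply the ones still visible when the study was run.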
Neil deGrasse Tyson and Neil Gaiman Describe Vision & Brilliance