
A Flowchart to Help You Determine if You're Having a Rational Discussion (JPEG image, 622x866 pixels)

Dunning–Kruger effect The Dunning–Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities. It was first described by Justin Kruger and David Dunning in 1999. Some researchers also include the opposite effect for high performers: their tendency to underestimate their skills. Numerous similar studies have been done. There is disagreement about the causes of the Dunning–Kruger effect, about where and how strongly it applies, and about its practical consequences. The effect is defined as the tendency of people with low ability in a specific area to give overly positive assessments of this ability; some researchers also include a metacognitive component in their definition. As David Dunning has put it, "Not knowing the scope of your own ignorance is part of the human condition."

File Sharing Following the death of Napster, all of the file-sharing networks that rose to mainstream popularity were decentralized. The most popular networks include Gnutella (which powers LimeWire, BearShare, and Morpheus) and FastTrack (which powers KaZaA and Grokster). The decentralization provides legal protection for the companies that distribute the software, since they do not have to run any component of the network themselves: once you get the software, you become part of the network, and the network can survive even if the parent company disappears. All of these networks operate as a web or mesh of neighboring node connections. When you search for files in the network, you send a search request to your neighbors, they send the request on to their neighbors, and so on. Each search response includes a "My Address" field; to download a file from the responding node, your node makes a direct connection to it using that address (for example, 128.223.12.122).
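To make the flooding concrete, here is a minimal sketch of that neighbor-to-neighbor search in Python. The node addresses, file names, query id, and TTL value are all invented for illustration, and real Gnutella messages carry more fields; this only shows the forwarding loop and the "My Address" reply.

class Node:
    def __init__(self, address, files):
        self.address = address              # advertised as "My Address" in replies
        self.files = set(files)
        self.neighbors = []
        self.seen = set()                   # query ids already handled, to stop loops

    def search(self, query_id, keyword, ttl, results):
        if query_id in self.seen or ttl <= 0:
            return                          # drop repeats and over-travelled queries
        self.seen.add(query_id)
        for name in self.files:
            if keyword in name:
                results.append((name, self.address))    # hit: reply with our address
        for neighbor in self.neighbors:                 # flood to every neighbor
            neighbor.search(query_id, keyword, ttl - 1, results)

# Hypothetical three-node mesh: a - b - c
a = Node("10.0.0.1", ["notes.txt"])
b = Node("10.0.0.2", ["song.mp3"])
c = Node("128.223.12.122", ["song_live.mp3"])
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]

hits = []
a.search(query_id=1, keyword="song", ttl=3, results=hits)
print(hits)     # [('song.mp3', '10.0.0.2'), ('song_live.mp3', '128.223.12.122')]

Once a hit comes back, the requester opens a direct connection to the returned address to fetch the file, so the download itself never passes through the intermediate nodes.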

P-value In statistical significance testing, the p-value is the probability of obtaining a test statistic result at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.[1][2] A researcher will often "reject the null hypothesis" when the p-value turns out to be less than a certain significance level, often 0.05 or 0.01. Such a result indicates that the observed result would be highly unlikely under the null hypothesis. Many common statistical tests, such as chi-squared tests or Student's t-test, produce test statistics which can be interpreted using p-values. In other words, given a "neutral" (or "null") hypothesis and an alternative hypothesis under test, the p-value measures how surprising the observed data would be if the null hypothesis were true. If the p-value is less than or equal to a threshold set in advance (traditionally 5% or 1%[5]), one rejects the null hypothesis in favor of the alternative.
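As a concrete illustration of this decision rule, here is a short sketch using SciPy's one-sample t-test. The sample values and the hypothesized mean of 5.0 are made up for the example; only the comparison against a pre-set 0.05 threshold follows the text.

from scipy import stats

sample = [5.1, 4.9, 5.6, 5.2, 4.8, 5.4, 5.3]        # hypothetical measurements
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value <= 0.05:                                  # threshold chosen before the test
    print("Reject the null hypothesis at the 5% level.")
else:
    print("Fail to reject the null hypothesis at the 5% level.")

Note that a p-value above the threshold does not prove the null hypothesis; it only means the data are not surprising enough to reject it.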

Lesson 1186 Yes, it's only September 30th today, but I figured you should get at least a day to plan for Sages' Day, right? Years ago in college, I was abroad for April Fools' Day, so I belatedly celebrated it with October Fools' Day by kidnapping a friend's guinea pig and taking pictures with it in a casserole dish as I held a knife over it and placed it in a microwave. I wish I'd gone for Sages' Day, now, though. Guess it's time to make up for it! Two quick notes: major thanks to everyone who helped transcribe all the past STW! Also, do you live in Minnesota?

Statistical hypothesis testing A statistical hypothesis test is a method of statistical inference using data from a scientific study. In statistics, a result is called statistically significant if it has been predicted as unlikely to have occurred by chance alone, according to a pre-determined threshold probability, the significance level. The phrase "test of significance" was coined by statistician Ronald Fisher.[1] These tests are used in determining what outcomes of a study would lead to a rejection of the null hypothesis for a pre-specified level of significance; this can help to decide whether results contain enough information to cast doubt on conventional wisdom, given that conventional wisdom has been used to establish the null hypothesis. The critical region of a hypothesis test is the set of all outcomes which cause the null hypothesis to be rejected in favor of the alternative hypothesis.
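To show what a critical region looks like in practice, here is a brief sketch of a two-sided z-test in Python with SciPy. The 0.05 significance level mirrors the text; the observed statistic of 2.31 is an invented example value.

from scipy import stats

alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)     # two-sided cutoff, about 1.96
z_observed = 2.31                          # hypothetical test statistic

# The critical region is {z : |z| > z_crit}; a statistic landing in it rejects H0.
if abs(z_observed) > z_crit:
    print(f"|z| = {abs(z_observed):.2f} > {z_crit:.2f}: reject the null hypothesis")
else:
    print(f"|z| = {abs(z_observed):.2f} <= {z_crit:.2f}: fail to reject")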

unlock brainpower
Zimmer: There is a lot of work lately in understanding how perception translates into action, making sense of what goes on when we make a decision to do something.
Wang: Some neuroscientists who are studying these processes are interested in the idea that perhaps you could have a brain center that gathers evidence and reaches a threshold for making a commitment. There might be another brain center that expresses confidence in the decision or even the very awareness of the decision. Here's an example that many of you may have encountered from everyday life. You may be presented with a dilemma, say, whether to take a job in a new city. So you can be pretty committed to a decision yet be unaware of it.
Zimmer: Mike, you've been working with legal scholars to try to bring insights from neuroscience to the law. When you have this basic insight, then you realize that new knowledge about who we are is going to change how we think about the law.

Wikiversity 12 Rules for Life Sometimes we just need to remember what the 12 Rules of Life really are: 1. Never give yourself a haircut after three margaritas. 2. You need only two tools: WD-40 and duct tape.

Analysis of variance Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences between group means and their associated procedures (such as "variation" among and between groups), developed by R. A. Fisher. In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. Doing multiple two-sample t-tests would result in an increased chance of committing a type I error. For this reason, ANOVAs are useful in comparing (testing) three or more means (groups or variables) for statistical significance. The analysis of variance can also be used as an exploratory tool to explain observations. [Figure: three model fits labeled "No fit", "Fair fit", and "Very good fit".]
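As a small illustration of the single-test idea, here is a sketch of a one-way ANOVA on three groups using SciPy; the group values are invented for the example.

from scipy import stats

group_a = [20, 21, 19, 22, 20]
group_b = [28, 30, 27, 26, 29]
group_c = [24, 23, 25, 24, 22]

# One F-test across all three groups, instead of three pairwise t-tests
# whose combined type I error rate would be inflated.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

A significant F only says that at least one group mean differs; follow-up comparisons are needed to say which.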

Soft Skills Entries tagged as "Soft Skills":
The Ancient Art of Story Telling – Revisited (July 15th, 2013) – The value of telling stories is nothing new, and is probably as old as language itself. Stories have energized and captivated the attention of humans throughout history. (Tags: Soft Skills)
Do You Have a Sixth Sense About Projects? (June 11th, 2013) – In a Tech Republic article, "10 highly valued soft skills for IT pros", one of the ten soft skills was termed "a sixth sense about projects." (Tags: Project Management Process · Soft Skills)
Are Project Managers "Builders"? (May 3rd, 2013) – I just read a really interesting post on LinkedIn, and it outlines four job types, which makes a lot of sense.
The Legendary Brian Tracy Is One of My Favorites (April 15th, 2013) – Brian Tracy is one of my favorite teachers and trainers.
The Sun Rises in the East…or At Least It Did Yesterday! (March 20th, 2013)

Ishikawa diagram Ishikawa diagrams (also called fishbone diagrams, herringbone diagrams, cause-and-effect diagrams, or Fishikawa) are causal diagrams created by Kaoru Ishikawa (1968) that show the causes of a specific event.[1][2] Common uses of the Ishikawa diagram are product design and quality defect prevention, to identify potential factors causing an overall effect. Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major categories to identify these sources of variation. [Figure: Ishikawa diagram, in fishbone shape, showing factors of Equipment, Process, People, Materials, Environment, and Management, all affecting the overall problem.] Ishikawa diagrams were popularized by Kaoru Ishikawa[3] in the 1960s, who pioneered quality management processes in the Kawasaki shipyards, and in the process became one of the founding fathers of modern management. Causes in the diagram are often grouped into major categories, such as the 6 M's or the six factors shown in the figure above.
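As a tiny illustration of that grouping, here is a Python sketch that organizes hypothetical causes under the six category names from the figure caption and prints them as a text outline; the effect and the individual causes are invented.

causes = {
    "Equipment":   ["worn cutting tool"],
    "Process":     ["unclear work instructions"],
    "People":      ["insufficient training"],
    "Materials":   ["supplier lot variation"],
    "Environment": ["humidity swings"],
    "Management":  ["rushed production schedule"],
}
effect = "surface defects on finished parts"    # hypothetical overall problem

print(f"Effect: {effect}")
for category, items in causes.items():          # each category is one fishbone rib
    print(f"  {category}:")
    for cause in items:
        print(f"    - {cause}")

In a real session the diagram is drawn rather than printed, but the same grouping step is what exposes which sources of variation have been considered.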

Mentat Wiki This wiki is a collaborative environment for exploring ways to become a better thinker. Topics that can be explored here include MemoryTechniques, MentalMath, CriticalThinking, BrainStorming, ShorthandSystems, NotebookSystems, and SmartDrugs. Other relevant topics are also welcome. Mindhacker: the support page for the 2011 book by RonHaleEvans and MartyHaleEvans. MindPerformanceHacks: the support page for the 2006 book of the same name by RonHaleEvans. Easily memorize complex information (MemoryTechnique); do hard math in your head (MentalMath); improve your intelligence; think better. What is a wiki? A wiki is a web site built collaboratively by a community of users. Feel free to add your own content to this wiki. The Mentat Wiki is powered by Oddmuse and hosted by the Center for Ludic Synergy.
