
Free Statistical Software - FreeStatistics.info

The Order of Operations: PEMDAS (Purplemath) If you are asked to simplify something like "4 + 2×3", the question that naturally arises is "Which way do I do this? Because there are two options!" I could add first, or I could multiply first. Which answer is the right one? It seems as though the answer depends on which way you look at the problem. To eliminate this confusion, we have some rules of precedence, established at least as far back as the 1500s, called the "order of operations". A common technique for remembering the order of operations is the abbreviation (or, more properly, the "acronym") "PEMDAS", which is turned into the mnemonic phrase "Please Excuse My Dear Aunt Sally": Parentheses (simplify inside 'em), Exponents, Multiplication and Division (from left to right), Addition and Subtraction (from left to right). When you have a bunch of operations of the same rank, you just operate from left to right. Simplify 4 + 3². Simplify 4 + (2 + 1)². Simplify 4 + [−1(−2 − 1)]².
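One quick way to check these precedence rules is to let a programming language evaluate the same expressions, since mainstream languages implement essentially the same order of operations. A minimal Python sketch; the expressions mirror the exercises above:

```python
# Order of operations: Python applies the same precedence rules as PEMDAS.
print(4 + 2 * 3)                  # multiplication before addition -> 10, not 18
print(4 + 3 ** 2)                 # exponent before addition -> 13
print(4 + (2 + 1) ** 2)           # parentheses first -> 4 + 9 = 13
print(4 + (-1 * (-2 - 1)) ** 2)   # innermost grouping first -> 4 + 9 = 13
print(8 / 4 * 2)                  # same-rank operators go left to right -> 4.0
```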

6 BASIC STATISTICAL TOOLS There are lies, damn lies, and statistics... (Anon.) 6.1 Introduction 6.2 Definitions 6.3 Basic Statistics 6.4 Statistical tests. 6.1 Introduction: In the preceding chapters, basic elements for the proper execution of analytical work, such as personnel, laboratory facilities, equipment, and reagents, were discussed. It was stated before that making mistakes in analytical work is unavoidable. A multitude of different statistical tools is available, some of them simple, some complicated, and often very specific to certain purposes. Clearly, statistics are a tool, not an aim. 6.2 Definitions (6.2.1 Error, 6.2.2 Accuracy, 6.2.3 Precision, 6.2.4 Bias): Discussing quality control implies the use of several terms and concepts with a specific (and sometimes confusing) meaning. 6.2.1 Error: Error is the collective noun for any departure of the result from the "true" value.* (* The "true" value of an attribute is by nature indeterminate and often has only a very relative meaning.)

History of the Normal Distribution. Author(s): David M. Prerequisites: Distributions, Central Tendency, Variability, Binomial Distribution. In the chapter on probability, we saw that the binomial distribution could be used to solve problems such as "If a fair coin is flipped 100 times, what is the probability of getting 60 or more heads?" The binomial probability is $P(x) = \binom{N}{x}\pi^x(1-\pi)^{N-x}$, where x is the number of heads (60), N is the number of flips (100), and π is the probability of a head (0.5). Abraham de Moivre, an 18th-century statistician and consultant to gamblers, was often called upon to make these lengthy computations. de Moivre noted that as the number of events (coin flips) increased, the shape of the binomial distribution approached a very smooth curve (Figure 1). de Moivre reasoned that if he could find a mathematical expression for this curve, he would be able to solve problems such as finding the probability of 60 or more heads out of 100 coin flips much more easily (Figure 2).
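To make de Moivre's problem concrete, the exact binomial tail probability can be compared against the normal approximation he was working toward. A minimal sketch using scipy; the numbers 100, 60, and 0.5 come from the example above, while the continuity correction of 0.5 is a standard refinement not stated in the excerpt:

```python
from scipy.stats import binom, norm

N, x, p = 100, 60, 0.5

# Exact binomial tail: P(X >= 60) = 1 - P(X <= 59)
exact = binom.sf(x - 1, N, p)

# Normal approximation: mean Np, sd sqrt(Np(1-p)), with continuity correction
mu = N * p
sigma = (N * p * (1 - p)) ** 0.5
approx = norm.sf(x - 0.5, loc=mu, scale=sigma)

print(f"exact binomial P(X >= 60) = {exact:.5f}")   # ~0.02844
print(f"normal approx. P(X >= 60) = {approx:.5f}")  # ~0.02872
```

The two answers agree to about three decimal places, which is why the smooth curve made de Moivre's computations so much easier.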

Conflict of interest. The presence of a conflict of interest is independent of the occurrence of impropriety. Therefore, a conflict of interest can be discovered and voluntarily defused before any corruption occurs. A widely used definition is: "A conflict of interest is a set of circumstances that creates a risk that professional judgement or actions regarding a primary interest will be unduly influenced by a secondary interest." Related to the practice of law: Judicial disqualification, also referred to as recusal, refers to the act of abstaining from participation in an official action, such as a court case or legal proceeding, due to a conflict of interest of the presiding court official or administrative officer. In the legal profession, the duty of loyalty owed to a client prohibits an attorney (or a law firm) from representing any other party with interests adverse to those of a current client. Generally (unrelated to the practice of law): There often is confusion over these two situations.

The T-Test. The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups, and especially appropriate as the analysis for the posttest-only two-group randomized experimental design. Figure 1 shows the distributions for the treated (blue) and control (green) groups in a study. What does it mean to say that the averages for two groups are statistically different? This leads us to a very important conclusion: when we are looking at the differences between scores for two groups, we have to judge the difference between their means relative to the spread or variability of their scores. The formula for the t-test is a ratio. The top part of the formula is easy to compute: just find the difference between the means. The bottom part is the standard error of that difference:

$$SE(\bar{X}_T - \bar{X}_C) = \sqrt{\frac{\mathrm{var}_T}{n_T} + \frac{\mathrm{var}_C}{n_C}}$$

Remember that the variance is simply the square of the standard deviation. The final formula for the t-test is:

$$t = \frac{\bar{X}_T - \bar{X}_C}{\sqrt{\dfrac{\mathrm{var}_T}{n_T} + \dfrac{\mathrm{var}_C}{n_C}}}$$
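The formula translates directly into a few lines of code. A minimal sketch, assuming two small hypothetical samples of posttest scores; the variable names and data are illustrative, not from the original:

```python
from statistics import mean, variance

# Hypothetical posttest scores for the two groups (illustrative data)
treated = [52, 58, 61, 55, 60, 57]
control = [48, 50, 53, 47, 52, 49]

var_t, var_c = variance(treated), variance(control)  # sample variances
n_t, n_c = len(treated), len(control)

# t = (difference between means) / (standard error of the difference)
se = (var_t / n_t + var_c / n_c) ** 0.5
t = (mean(treated) - mean(control)) / se
print(f"t = {t:.3f}")
```

For comparison, scipy.stats.ttest_ind(treated, control, equal_var=False) computes the same unpooled-variance statistic (Welch's form matches the standard error above) and also returns a p-value.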

ONE-WAY ANOVA. Analysis of variance (ANOVA) is used for comparing the means of three or more samples. Use this test for comparing the means of 3 or more samples/treatments, to avoid the error inherent in performing multiple t-tests. Background: If we have, say, 3 treatments to compare (A, B, C), then we would need 3 separate t-tests (comparing A with B, A with C, and B with C). If we had seven treatments we would need 21 separate t-tests. Ideally, for this test we would have the same number of replicates for each treatment, but this is not essential. An important assumption underlies the analysis of variance: that all treatments have similar variance. Procedure (see worked example): Don't be frightened by this! Assume that we have recorded the biomass of 3 bacteria in flasks of glucose broth, and we used 3 replicate flasks for each bacterium. The worked example then proceeds through 13 numbered steps: for each treatment, compute the totals Σx and Σx² and the sum of squared deviations (Σd²); these are combined into between-treatment and within-treatment sums of squares for the F-test. A sketch of the same comparison in code follows.
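As a cross-check of the hand calculation, a one-way ANOVA on data shaped like the example (3 bacteria, 3 replicate flasks each) can be run with scipy; the biomass numbers below are invented for illustration:

```python
from scipy.stats import f_oneway

# Hypothetical biomass (g per flask) for 3 bacteria, 3 replicate flasks each
bacterium_a = [12.1, 11.8, 12.5]
bacterium_b = [14.0, 13.6, 14.3]
bacterium_c = [11.2, 11.5, 10.9]

# One F-test replaces the 3 pairwise t-tests (A vs B, A vs C, B vs C)
f_stat, p_value = f_oneway(bacterium_a, bacterium_b, bacterium_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```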

Stats: Two-Way ANOVA. The two-way analysis of variance is an extension of the one-way analysis of variance. There are two independent variables (hence the name two-way). Assumptions: The populations from which the samples were obtained must be normally or approximately normally distributed. Hypotheses: There are three sets of hypotheses with the two-way ANOVA. The null hypotheses are: the population means of the first factor are equal; the population means of the second factor are equal; and there is no interaction between the two factors. Factors: The two independent variables in a two-way ANOVA are called factors. Treatment Groups: Treatment groups are formed by making all possible combinations of the two factors. As an example, let's assume we're planting corn. The data that actually appear in the table are samples. Main Effect: The main effect involves the independent variables one at a time. Interaction Effect: The interaction effect is the effect that one factor has on the other factor. Within Variation: The within variation is the sum of squares within each treatment group. F-Tests: There is an F-test for each of the three sets of hypotheses.
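One common way to run a two-way ANOVA in Python is via an OLS model with statsmodels. A minimal sketch built on the corn-planting idea from the excerpt; the factors (fertilizer and seed variety), the yield numbers, and all variable names are hypothetical:

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical corn yields: every combination of 2 fertilizers x 2 seed
# varieties (the treatment groups), with 3 replicate plots per combination
df = pd.DataFrame({
    "fertilizer": ["F1"] * 6 + ["F2"] * 6,
    "seed":       (["S1"] * 3 + ["S2"] * 3) * 2,
    "yield_":     [58, 60, 57, 64, 66, 65, 71, 70, 73, 69, 68, 70],
})

# 'C(a) * C(b)' expands to both main effects plus the interaction term
model = ols("yield_ ~ C(fertilizer) * C(seed)", data=df).fit()
print(anova_lm(model, typ=2))  # one F-test per main effect and interaction
```

The ANOVA table prints one row per main effect and one for the interaction, mirroring the three sets of hypotheses described above.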

Post Hoc Tests for One-Way ANOVA. Remember that after rejecting the null hypothesis in an ANOVA, all you know is that the groups you compared are different in some way. Imagine you performed the following experiment and ended up rejecting the null hypothesis: researchers want to test a new anti-anxiety medication. Now that we've rejected the null hypothesis, it is appropriate to perform a post-hoc test to discover where the three groups are different. In this lecture, we'll be examining two different tests: Tukey HSD and Scheffé. The Tukey test is more liberal than the Scheffé test. Tukey HSD: With this test, we're interested in examining mean differences. Next, we calculate our HSD (Honestly Significant Difference):

$$HSD = q\,\sqrt{\frac{MS_{within}}{n}}$$

For this equation, we need $MS_{within}$ from our ANOVA source table, our n (the number of scores per group), and the studentized range statistic q. Now, we measure how far each mean is from each other mean: any pair of means farther apart than the HSD is significantly different. Here, we find that all three means are different from one another. The Scheffé test follows the same logic with a more conservative criterion; in this example it, too, leads us to reject the null hypothesis.
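statsmodels ships a ready-made Tukey HSD. A minimal sketch on invented data for three medication groups; the group labels and anxiety scores are hypothetical, chosen only to mirror the three-group experiment described above:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical anxiety scores for three dosage groups (n = 5 each)
scores = np.array([35, 38, 36, 40, 37,    # placebo
                   28, 30, 27, 31, 29,    # low dose
                   20, 22, 19, 23, 21])   # high dose
groups = ["placebo"] * 5 + ["low"] * 5 + ["high"] * 5

# Pairwise comparisons using the Tukey HSD criterion at alpha = 0.05
result = pairwise_tukeyhsd(scores, groups, alpha=0.05)
print(result)  # one row per pair: mean difference, p-value, reject yes/no
```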
