1.3 Statistical Reasoning in Everyday Life

Asked about the ideal wealth distribution in America, Democrats and Republicans were surprisingly similar. In the Democrats’ ideal world, the richest 20 percent would possess 30 percent of the wealth. Republicans preferred a similar 35 percent (Norton & Ariely, 2011).


In descriptive, correlational, and experimental research, statistics are tools that help us see and interpret what the unaided eye might miss. Sometimes the unaided eye misses badly. Researchers Michael Norton and Dan Ariely (2011) invited 5,522 Americans to estimate the percent of wealth possessed by the richest 20 percent in their country. The average person’s guess—58 percent—“dramatically underestimated” the actual wealth inequality. (The wealthiest 20 percent, they reported, possessed 84 percent of the wealth.)

When setting goals, we love big round numbers. We’re far more likely to want to lose 20 pounds than 19 or 21 pounds. We’re far more likely to retake the SAT if our verbal plus math score is just short of a big round number, such as 1200. By modifying their behavior, batters are nearly four times more likely to finish the season with a .300 average than with a .299 average (Pope & Simonsohn, 2011).

Accurate statistical understanding benefits everyone. To be an educated person today is to be able to apply simple statistical principles to everyday reasoning. One needn’t memorize complicated formulas to think more clearly and critically about data.

Off-the-top-of-the-head estimates often misread reality and then mislead the public. Someone throws out a big, round number. Others echo it, and before long the big, round number becomes public misinformation.

The point to remember: Doubt big, round, undocumented numbers. That’s a lesson we seem to appreciate intuitively: we find precise numbers more credible than round ones (Oppenheimer et al., 2014). When U.S. Secretary of State John Kerry sought to rally American support in 2013 for a military response to Syria’s apparent use of chemical weapons, his argument gained credibility from its precision: “The United States government now knows that at least 1,429 Syrians were killed in this attack, including at least 426 children.”

Statistical illiteracy also feeds needless health scares (Gigerenzer et al., 2008, 2009, 2010). In the 1990s, the British press reported a study showing that women taking a particular contraceptive pill had a 100 percent increased risk of blood clots that could produce strokes. This caused thousands of women to stop taking the pill, leading to a wave of unwanted pregnancies and an estimated 13,000 additional abortions (which also are associated with increased blood-clot risk). And what did the study find? A 100 percent increased risk, indeed—but only from 1 in 7000 to 2 in 7000. Such false alarms underscore the need to teach statistical reasoning and to present statistical information more transparently.

Describing Data

1-11 How do we describe data using three measures of central tendency, and what is the relative usefulness of the two measures of variation?

Once researchers have gathered their data, they may use descriptive statistics to organize those data meaningfully. One way to do this is to convert the data into a simple bar graph, as in FIGURE 1.8 below, which displays a distribution of different brands of trucks still on the road after a decade. When reading statistical graphs such as this, take care. It’s easy to design a graph to make a difference look big (Figure 1.8a) or small (Figure 1.8b). The secret lies in how you label the vertical scale (the y-axis).

The point to remember: Think smart. When viewing graphs, read the scale labels and note their range.

RETRIEVAL PRACTICE

Figure 1.8
Read the scale labels
  • An American truck manufacturer offered graph (a)—with actual brand names included—to suggest the much greater durability of its trucks. What does graph (b) make clear about the varying durability, and how is this accomplished?

Note how the y-axis of each graph is labeled. The range for the y-axis label in graph (a) is only from 95 to 100. The range for graph (b) is from 0 to 100. All the trucks rank as 95% and up, so almost all are still functioning after 10 years, which graph (b) makes clear.

mode the most frequently occurring score(s) in a distribution.

Measures of Central Tendency

mean the arithmetic average of a distribution, obtained by adding the scores and then dividing by the number of scores.

The next step is to summarize the data using some measure of central tendency, a single score that represents a whole set of scores. The simplest measure is the mode, the most frequently occurring score or scores. The most familiar is the mean, or arithmetic average—the sum of all the scores divided by the number of scores. The midpoint—the 50th percentile—is the median. On a divided highway, the median is the middle. So, too, with data: If you arrange all the scores in order from the highest to the lowest, half will be above the median and half will be below it.

median the middle score in a distribution; half the scores are above it and half are below it.

Measures of central tendency neatly summarize data. But consider what happens to the mean when a distribution is lopsided, when it’s skewed by a few way-out scores. With income data, for example, the mode, median, and mean often tell very different stories (FIGURE 1.9). This happens because the mean is biased by a few extreme scores. When Microsoft co-founder Bill Gates sits down in an intimate café, its average (mean) customer instantly becomes a billionaire. But the customers’ median wealth remains unchanged. Understanding this, you can see how a British newspaper could accurately run the headline “Income for 62% Is Below Average” (Waterhouse, 1993). Because the bottom half of British income earners receive only a quarter of the national income cake, most British people, like most people everywhere, make less than the mean. Mean and median tell different true stories.
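The effect of extreme scores is easy to verify for yourself. Below is a minimal sketch in Python, using only the standard library's statistics module and an invented list of incomes (not the values in Figure 1.9), showing how a single very large income pulls the mean upward while the median and mode stay put.

    # Three measures of central tendency on an invented set of incomes.
    from statistics import mean, median, mode

    incomes = [25_000, 30_000, 30_000, 35_000, 40_000, 45_000, 1_420_000]  # one extreme score

    print(mean(incomes))    # about 232,143: dragged upward by the single huge income
    print(median(incomes))  # 35,000: the middle score, unaffected by the outlier
    print(mode(incomes))    # 30,000: the most frequently occurring score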

The average person has one ovary and one testicle.

Figure 1.9
A skewed distribution This graphic representation of the distribution of a village’s incomes illustrates the three measures of central tendency—mode, median, and mean. Note how just a few high incomes make the mean—the fulcrum point that balances the incomes above and below—deceptively high.


The point to remember: Always note which measure of central tendency is reported. If it is a mean, consider whether a few atypical scores could be distorting it.

Measures of Variation

Knowing the value of an appropriate measure of central tendency can tell us a great deal. But the single number omits other information. It helps to know something about the amount of variation in the data—how similar or diverse the scores are. Averages derived from scores with low variability are more reliable than averages based on scores with high variability. Consider a basketball player who scored between 13 and 17 points in each of the season’s first 10 games. Knowing this, we would be more confident that she would score near 15 points in her next game than if her scores had varied from 5 to 25 points.

range the difference between the highest and lowest scores in a distribution.


The range of scores—the gap between the lowest and highest—provides only a crude estimate of variation. A couple of extreme scores in an otherwise uniform group, such as the $950,000 and $1,420,000 incomes in Figure 1.9, will create a deceptively large range.

The more useful standard for measuring how much scores deviate from one another is the standard deviation. It better gauges whether scores are packed together or dispersed, because it uses information from each score. The computation (see TABLE 1.4 for an example) assembles information about how much individual scores differ from the mean. If your college or university attracts students of a certain ability level, their intelligence scores will have a relatively small standard deviation compared with the more diverse community population outside your school.

Table 1.4
Standard Deviation Is Much More Informative Than Mean Alone
Note that the test scores in Class A and Class B have the same mean (80), but very different standard deviations, which tell us more about how the students in each class are really faring.
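The idea behind Table 1.4 can be sketched in a few lines of Python. The scores below are invented (they are not the actual Table 1.4 values), but they illustrate two classes with the same mean of 80 and very different standard deviations.

    # Same mean, different spread: invented scores for two hypothetical classes.
    from statistics import mean, pstdev

    class_a = [78, 79, 80, 81, 82]     # scores packed tightly around the mean
    class_b = [60, 70, 80, 90, 100]    # same mean, much more spread out

    print(mean(class_a), pstdev(class_a))   # 80 and a standard deviation of about 1.4
    print(mean(class_b), pstdev(class_b))   # 80 and a standard deviation of about 14.1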

standard deviation a computed measure of how much scores vary around the mean score.

normal curve (normal distribution) a symmetrical, bell-shaped curve that describes the distribution of many types of data; most scores fall near the mean (about 68 percent fall within one standard deviation of it) and fewer and fewer near the extremes.

You can grasp the meaning of the standard deviation if you consider how scores tend to be distributed in nature. Large numbers of data—heights, weights, intelligence scores, grades (though not incomes)—often form a symmetrical, bell-shaped distribution. Most cases fall near the mean, and fewer cases fall near either extreme. This bell-shaped distribution is so typical that we call the curve it forms the normal curve.

As FIGURE 1.10 shows, a useful property of the normal curve is that roughly 68 percent of the cases fall within one standard deviation on either side of the mean. About 95 percent of cases fall within two standard deviations. Thus, as Chapter 10 notes, about 68 percent of people taking an intelligence test will score within ±15 points of 100. About 95 percent will score within ±30 points.
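Readers who want to check the 68 and 95 percent figures can do so with a short simulation. The sketch below, again in Python with only the standard library, draws 100,000 simulated intelligence scores from a normal distribution with a mean of 100 and a standard deviation of 15, then counts how many fall within one and two standard deviations of the mean.

    # Checking the 68/95 rule on simulated intelligence scores (mean 100, SD 15).
    import random

    scores = [random.gauss(100, 15) for _ in range(100_000)]

    within_1_sd = sum(85 <= s <= 115 for s in scores) / len(scores)
    within_2_sd = sum(70 <= s <= 130 for s in scores) / len(scores)

    print(within_1_sd)  # roughly 0.68 (about 68 percent within one standard deviation)
    print(within_2_sd)  # roughly 0.95 (about 95 percent within two standard deviations)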

Figure 1.10
The normal curve Scores on aptitude tests tend to form a normal, or bell-shaped, curve. For example, the most commonly used intelligence test, the Wechsler Adult Intelligence Scale, calls the average score 100.

Question

Why is it important to know measures of variation, and not just a measure of central tendency, when describing a set of scores?
Possible sample answer: Measures of central tendency summarize the data with a single feature. They give us an average for the range of scores, but we need to know how much variation there is among the scores. Measures of variation provide this information. In particular, the standard deviation is the best gauge of whether scores are packed together or dispersed, because it uses information from each score.


For an interactive tutorial on these statistical concepts, visit LaunchPad’s PsychSim 6: Descriptive Statistics.

RETRIEVAL PRACTICE

  • The average of a distribution of scores is the ______________. The score that shows up most often is the ______________. The score right in the middle of a distribution (half the scores above it; half below) is the ______________. We determine how much scores vary around the average in a way that includes information about the ______________ of scores (difference between highest and lowest) by using the ______________ ______________ formula.

mean; mode; median; range; standard deviation

Significant Differences

1-12 How do we know whether an observed difference can be generalized to other populations?

Data are “noisy.” The average score in one group (children who were breast-fed as babies) could conceivably differ from the average score in another group (children who were bottle-fed as babies) not because of any real difference but merely because of chance fluctuations in the people sampled. How confidently, then, can we infer that an observed difference is not just a fluke—a chance result from the research sample? For guidance, we can ask how reliable and significant the differences are. These inferential statistics help us determine if results can be generalized to a larger population.

When Is an Observed Difference Reliable?

In deciding when it is safe to generalize from a sample, we should keep three principles in mind:

  1. Representative samples are better than biased samples. The best basis for generalizing is not from the exceptional and memorable cases one finds at the extremes but from a representative sample of cases. Research never randomly samples the whole human population. Thus, it pays to keep in mind what population a study has sampled.
  2. Less-variable observations are more reliable than those that are more variable. As we noted earlier in the example of the basketball player whose game-to-game points were consistent, an average is more reliable when it comes from scores with low variability.
  3. More cases are better than fewer. An eager prospective student visits two university campuses, each for a day. At the first, the student randomly attends two classes and discovers both instructors to be witty and engaging. At the next campus, the two sampled instructors seem dull and uninspiring. Returning home, the student (discounting the small sample size of only two teachers at each institution) tells friends about the “great teachers” at the first school, and the “bores” at the second. Again, we know it but we ignore it: Averages based on many cases are more reliable (less variable) than averages based on only a few cases.

The point to remember: Smart thinkers are not overly impressed by a few anecdotes. Generalizations based on a few unrepresentative cases are unreliable.
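A brief simulation makes the third principle concrete. The sketch below uses invented numbers: it repeatedly draws small samples and large samples from the same population and compares how widely their averages swing.

    # Averages of small samples vary far more than averages of large samples.
    import random

    def sample_means(sample_size, n_samples=1_000):
        """Return the mean of each of n_samples random samples of the given size."""
        return [
            sum(random.gauss(15, 5) for _ in range(sample_size)) / sample_size
            for _ in range(n_samples)
        ]

    small = sample_means(2)    # like judging a school from two classes
    large = sample_means(50)   # like judging it from fifty

    print(max(small) - min(small))   # wide spread: small samples often look extreme
    print(max(large) - min(large))   # much narrower spread: large samples are more reliable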


When Is an Observed Difference Significant?

Perhaps you’ve compared men’s and women’s scores on a laboratory test of aggression, and found a gender difference. But individuals differ. How likely is it that the difference you observed was just a fluke? Statistical testing can estimate that.

Here is the underlying logic: When averages from two samples are each reliable measures of their respective populations (as when each is based on many observations that have small variability), then their difference is likely to be reliable as well. (Example: The less the variability in women’s and in men’s aggression scores, the more confidence we would have that any observed gender difference is reliable.) And when the difference between the sample averages is large, we have even more confidence that the difference between them reflects a real difference in their populations.

statistical significance a statistical statement of how likely it is that an obtained result occurred by chance.

In short, when sample averages are reliable, and when the difference between them is relatively large, we say the difference has statistical significance. This means that the observed difference is probably not due to chance variation between the samples.

For a 9.5-minute video synopsis of psychology’s scientific research strategies, visit LaunchPad’s Video: Research Methods.

In judging statistical significance, psychologists are conservative. They are like juries who must presume innocence until guilt is proven. For most psychologists, proof beyond a reasonable doubt means not making much of a finding unless the odds of its occurring by chance, if no real effect exists, are less than 5 percent.
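In practice, such tests are run by software rather than by hand. The sketch below assumes the SciPy library is available and uses invented scores for two groups; it illustrates the logic of a significance test, not the analysis of any particular study.

    # A two-sample t test on invented scores for two groups (requires SciPy).
    from scipy import stats

    group_1 = [14, 15, 13, 16, 15, 14, 17, 15]   # one group's scores
    group_2 = [12, 13, 11, 14, 12, 13, 12, 13]   # the other group's scores

    t_statistic, p_value = stats.ttest_ind(group_1, group_2)

    # By the conventional standard, the difference counts as statistically
    # significant if p_value is below .05, that is, if such a difference would
    # arise by chance less than 5 percent of the time when no real effect exists.
    print(t_statistic, p_value)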

When reading about research, you should remember that, given large enough or homogeneous enough samples, a difference between them may be “statistically significant” yet have little practical significance. For example, comparisons of intelligence test scores among hundreds of thousands of first-born and later-born individuals indicate a highly significant tendency for first-born individuals to have higher average scores than their later-born siblings (Kristensen & Bjerkedal, 2007; Zajonc & Markus, 1975). But because the scores differ by only one to three points, the difference has little practical importance.

The point to remember: Statistical significance indicates the likelihood that a result will happen by chance. But this does not say anything about the importance of the result.

RETRIEVAL PRACTICE

  • Can you solve this puzzle?

The registrar’s office at the University of Michigan has found that usually about 100 students in Arts and Sciences have perfect marks at the end of their first term at the University. However, only about 10 to 15 students graduate with perfect marks. What do you think is the most likely explanation for the fact that there are more perfect marks after one term than at graduation (Jepson et al., 1983)?

Averages based on fewer courses are more variable, which guarantees a greater number of extremely low and high marks at the end of the first term.

  • ______________ statistics summarize data, while ______________ statistics determine if data can be generalized to other populations.

Descriptive; inferential
