## Describing Data

A-1 How do we describe data using three measures of central tendency, and what is the relative usefulness of the two measures of variation?

Once researchers have gathered their data, they may use descriptive statistics to organize that data meaningfully. One way to do this is to show the data in a simple bar graph, as in FIGURE A.1 (below), which displays the percentage of trucks of various brands still on the road after a decade. When reading statistical graphs such as this one, take care. It’s easy to design a graph to make a difference look big (FIGURE A.1a) or small (FIGURE A.1b). The secret lies in how you label the vertical scale (the y-axis).

FIGURE A.1 Read the scale labels


### Question

ANSWER: Note how the y-axis of each graph is labeled. The range for the y-axis label in graph (a) is only from 95 to 100. The range for graph (b) is from 0 to 100. All the trucks rank as 95% and up, so almost all are still functioning after 10 years, which graph (b) makes clear.

The point to remember: Think smart. When viewing graphs, read the scale labels and note their range.

The average person has one ovary and one testicle.

## Measures of Central Tendency

mode the most frequently occurring score(s) in a distribution.

mean the arithmetic average of a distribution, obtained by adding the scores and then dividing by the number of scores.

median the middle score in a distribution; half the scores are above it and half are below it.

The next step is to summarize the data using some measure of central tendency, a single score that represents a whole set of scores. The simplest measure is the mode, the most frequently occurring score or scores. The most familiar is the mean, or arithmetic average—the total sum of all the scores divided by the number of scores. The midpoint—the 50th percentile—is the median. On a divided highway, the median is the middle. So, too, with data: If you arrange all the scores in order from the highest to the lowest, half will be above the median and half will be below it.
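The three measures just defined can be computed directly with Python's standard-library `statistics` module; the scores below are invented for illustration:

```python
# Three measures of central tendency on a small, made-up set of test scores.
import statistics

scores = [70, 75, 75, 80, 85, 90, 95]

print(statistics.mode(scores))    # most frequent score: 75
print(statistics.median(scores))  # middle score (half above, half below): 80
print(statistics.mean(scores))    # arithmetic average: 570 / 7, about 81.4
```

Note that the three measures need not agree: here the mode, median, and mean are three different numbers, even in a fairly ordinary distribution.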

Measures of central tendency neatly summarize data. But consider what happens to the mean when a distribution is lopsided, when it’s skewed by a few way-out scores. With income data, for example, the mode, median, and mean often tell very different stories (FIGURE A.2). This happens because the mean is biased by a few extreme scores. When Microsoft co-founder Bill Gates sits down in a small café, its average (mean) customer instantly becomes a billionaire. But the median customer’s wealth remains unchanged. Understanding this, you can see why, according to the 2010 U.S. Census, nearly 65 percent of U.S. households have “below average” income. The bottom half of earners receive much less than half the national income cake. So, most Americans make less than the mean. Mean and median tell different true stories.

FIGURE A.2 A skewed distribution This graphic representation of the distribution of a village’s incomes illustrates the three measures of central tendency—mode, median, and mean. Note how just a few high incomes make the mean—the fulcrum point that balances the incomes above and below—deceptively high.
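The Bill Gates example can be sketched numerically. The café wealth figures below are invented, but they show the general point: one extreme score drags the mean wildly upward while barely moving the median.

```python
# Hypothetical wealth (in dollars) of five café customers, then the same
# café after one extreme score (a $100-billion fortune) walks in the door.
import statistics

wealth = [30_000, 40_000, 50_000, 60_000, 70_000]
print(statistics.mean(wealth))    # 50,000
print(statistics.median(wealth))  # 50,000

wealth.append(100_000_000_000)    # one extreme score skews the distribution
print(statistics.mean(wealth))    # roughly 16.7 billion -- unrepresentative
print(statistics.median(wealth))  # 55,000 -- still describes a typical customer
```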


The point to remember: Always note which measure of central tendency is reported. If it is a mean, consider whether a few atypical scores could be distorting it.

## Measures of Variation

Knowing the value of an appropriate measure of central tendency can tell us a great deal. But the single number omits other information. It helps to know something about the amount of variation in the data—how similar or diverse the scores are. Averages derived from scores with low variability are more reliable than averages based on scores with high variability. Consider a basketball player who scored between 13 and 17 points in each of the season’s first 10 games. Knowing this, we would be more confident that she would score near 15 points in her next game than if her scores had varied from 5 to 25 points.

range the difference between the highest and lowest scores in a distribution.

The range of scores—the gap between the lowest and highest—provides only a crude estimate of variation. In an otherwise uniform group, a couple of extreme scores, such as the \$950,000 and \$1,420,000 incomes in FIGURE A.2, will create a deceptively large range.

standard deviation a computed measure of how much scores vary around the mean score.

The more useful standard for measuring how much scores deviate from one another is the standard deviation. It better gauges whether scores are packed together or dispersed, because it uses information from each score. The computation assembles information about how much individual scores differ from the mean, which can be very telling. Let’s say test scores from Class A and Class B both have the same mean (75 percent) but very different standard deviations (5.0 for Class A and 15.0 for Class B). Have you ever had test experiences like that—where two-thirds of your classmates in one course score in the 70 to 80 percent range, with scores in another course more spread out (two-thirds between 60 and 90)? The standard deviation tells us more about how each class is really faring than does the mean score alone. As another example, consider varsity and intramural sports. A school’s varsity volleyball players’ ability levels will have a relatively small standard deviation compared with the more diverse ability levels found in those playing on intramural volleyball teams.
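The computation described above—square each score's deviation from the mean, average those squared deviations, then take the square root—can be sketched directly. The two classes below are invented so that both average 75 percent but spread out very differently, echoing the Class A/Class B example:

```python
# Standard deviation, computed from scratch: the square root of the
# average squared deviation from the mean.
import math

def std_dev(scores):
    mean = sum(scores) / len(scores)
    variance = sum((x - mean) ** 2 for x in scores) / len(scores)
    return math.sqrt(variance)

class_a = [70, 72, 74, 75, 75, 76, 78, 80]   # mean 75, tightly packed
class_b = [50, 60, 68, 75, 75, 82, 90, 100]  # mean 75, widely dispersed

print(max(class_a) - min(class_a))   # range: 10
print(max(class_b) - min(class_b))   # range: 50
print(round(std_dev(class_a), 1))    # 3.0  -- scores hug the mean
print(round(std_dev(class_b), 1))    # 15.0 -- scores are spread out
```

The identical means conceal what the standard deviations reveal: a typical Class A score sits within a few points of 75, while a typical Class B score does not.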

normal curve (normal distribution) a symmetrical, bell-shaped curve that describes the distribution of many types of data; most scores fall near the mean (about 68 percent fall within one standard deviation of it) and fewer and fewer near the extremes.

You can grasp the meaning of the standard deviation if you consider how scores tend naturally to be distributed. Large numbers of data—heights, weights, intelligence scores, grades (though not incomes)—often form a symmetrical, bell-shaped distribution. Most cases fall near the mean, and fewer cases fall near either extreme. This bell-shaped distribution is so typical that we call the curve it forms the normal curve.

For an interactive tutorial on these statistical concepts, visit LaunchPad’s PsychSim 6: Descriptive Statistics.

As FIGURE A.3 (below) shows, a useful property of the normal curve is that roughly 68 percent of the cases fall within one standard deviation on either side of the mean. About 95 percent of cases fall within two standard deviations. Thus, as Chapter 9 notes, about 68 percent of people taking an intelligence test will score within ±15 points of 100. About 95 percent will score within ±30 points.

FIGURE A.3 The normal curve Scores on aptitude tests tend to form a normal, or bell-shaped, curve. For example, the most commonly used intelligence test, the Wechsler Adult Intelligence Scale, calls the average score 100.
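The 68 percent and 95 percent figures can be checked by simulation, assuming (as the text does for intelligence scores) a normal distribution with mean 100 and standard deviation 15:

```python
# Simulate many normally distributed test scores and count the proportion
# falling within one and two standard deviations of the mean.
import random

random.seed(0)  # fixed seed so the simulation is repeatable
scores = [random.gauss(100, 15) for _ in range(100_000)]

within_1sd = sum(85 <= s <= 115 for s in scores) / len(scores)
within_2sd = sum(70 <= s <= 130 for s in scores) / len(scores)
print(round(within_1sd, 2))  # about 0.68
print(round(within_2sd, 2))  # about 0.95
```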


### Question

The average of a distribution of scores is the mean. The score that shows up most often is the mode. The score right in the middle of a distribution (half the scores above it; half below) is the median. We determine how much scores vary around the average in a way that includes information about the range of scores (difference between highest and lowest) by using the standard deviation formula.

## Correlation: A Measure of Relationships

A-2 What does it mean when we say two things are correlated?

Throughout this book, we often ask how strongly two things are related: For example, how closely related are the personality scores of identical twins? How well do intelligence test scores predict career achievement? How closely is stress related to disease?

correlation coefficient a statistical index of the relationship between two things (from −1.00 to +1.00).

scatterplot a graphed cluster of dots, each of which represents the values of two variables. The slope of the points suggests the direction of the relationship between the two variables. The amount of scatter suggests the strength of the correlation (little scatter indicates high correlation).

As we saw in Chapter 1, describing behavior is a first step toward predicting it. When naturalistic observation and surveys reveal that one trait or behavior accompanies another, we say the two correlate. A correlation coefficient is a statistical measure of relationship. In such cases, scatterplots can be very revealing.

Each dot in a scatterplot represents the values of two variables. The three scatterplots in FIGURE A.4 illustrate the range of possible correlations from a perfect positive to a perfect negative. (Perfect correlations rarely occur in the real world.) A correlation is positive if two sets of scores, such as height and weight, tend to rise or fall together.

FIGURE A.4 Scatterplots, showing patterns of correlation Correlations—abbreviated r—can range from +1.00 (scores on one measure increase in direct proportion to scores on another), to 0.00 (no relationship), to –1.00 (scores on one measure decrease precisely as scores rise on the other).


For an animated tutorial on correlations, visit LaunchPad’s Concept Practice: Positive and Negative Correlations. See also LaunchPad’s Video: Correlational Studies for another helpful tutorial animation.

Saying that a correlation is “negative” says nothing about its strength. A correlation is negative if two sets of scores relate inversely, one set going up as the other goes down.

Statistics can help us see what the naked eye sometimes misses. To demonstrate this for yourself, try an imaginary project. You wonder if tall men are more or less easygoing, so you collect two sets of scores: men’s heights and men’s anxiety. You measure the heights of 20 men, and you have them take an anxiety test.

With all the relevant data right in front of you (TABLE A.1), can you tell whether the correlation between height and anxiety is positive, negative, or close to zero?

Comparing the columns in TABLE A.1, most people detect very little relationship between height and anxiety. In fact, the correlation in this imaginary example is positive (r = +0.63), as we can see if we display the data as a scatterplot (FIGURE A.5).

FIGURE A.5 Scatterplot for height and anxiety This display of data from 20 imagined people (each represented by a data point) reveals an upward slope, indicating a positive correlation. The considerable scatter of the data indicates the correlation is much lower than +1.00.
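A correlation coefficient can be computed as the average product of the two variables' standardized deviations. The sketch below uses invented height/anxiety pairs (not the data from TABLE A.1) to show both a perfect positive correlation and a moderate one with plenty of scatter:

```python
# Pearson's correlation coefficient r, computed from scratch.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs) / n)
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys) / n)
    return sum((x - mean_x) * (y - mean_y)
               for x, y in zip(xs, ys)) / (n * sd_x * sd_y)

heights = [64, 66, 67, 68, 70, 71, 72, 74]  # inches (invented)
anxiety = [30, 22, 40, 28, 35, 45, 32, 48]  # anxiety scores (invented)

print(round(pearson_r(heights, heights), 2))  # 1.0 -- a perfect positive correlation
print(round(pearson_r(heights, anxiety), 2))  # about 0.66 -- positive, with scatter
```

As in the chapter's example, the positive r is hard to spot by eyeballing the paired columns, yet the computation makes it plain.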

If we fail to see a relationship when data are presented as systematically as in TABLE A.1, how much less likely are we to notice such relationships in everyday life? To see what is right in front of us, we sometimes need statistical illumination. We can easily see evidence of gender discrimination when given statistically summarized information about job level, seniority, performance, gender, and salary. But we often see no discrimination when the same information dribbles in, case by case (Twiss et al., 1989).

The point to remember: Correlation coefficients tell us nothing about cause and effect, but they can help us see the world more clearly by revealing the extent to which two things relate.

## Regression Toward the Mean

A-3 What is regression toward the mean?

Correlations not only make visible the relationships we might otherwise miss, they also restrain our “seeing” nonexistent relationships. When we believe there is a relationship between two things, we are likely to notice and recall instances that confirm our belief. If we believe that dreams are forecasts of actual events, we may notice and recall confirming instances more than disconfirming instances. The result is an illusory correlation.


regression toward the mean the tendency for extreme or unusual scores or events to fall back (regress) toward the average.

Illusory correlations feed an illusion of control—that chance events are subject to our personal control. Gamblers, remembering their lucky rolls, may come to believe they can influence the roll of the dice by again throwing gently for low numbers and hard for high numbers. The illusion that uncontrollable events correlate with our actions is also fed by a statistical phenomenon called regression toward the mean. Average results are more typical than extreme results. Thus, after an unusual event, things tend to return toward their average level; extraordinary happenings tend to be followed by more ordinary ones.

The point may seem obvious, yet we regularly miss it: We sometimes attribute what may be a normal regression (the expected return to normal) to something we have done. Consider two examples:

• Students who score much lower or higher on an exam than they usually do are likely, when retested, to return to their average.

• Unusual ESP subjects who defy chance when first tested nearly always lose their “psychic powers” when retested (a phenomenon parapsychologists have called the decline effect).
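The exam example above can be simulated under a simple assumption: each observed score equals a stable ability plus random luck. Students selected for an extreme first score tend, on retesting, to land closer to the overall average. All the numbers below are invented for the sketch:

```python
# A toy simulation of regression toward the mean.
import random

random.seed(1)  # fixed seed for a repeatable simulation
ability = [random.gauss(75, 5) for _ in range(10_000)]        # stable skill
test1 = [a + random.gauss(0, 10) for a in ability]            # skill + luck
test2 = [a + random.gauss(0, 10) for a in ability]            # new luck, same skill

# Select the students whose first score was extreme (90 or above) ...
extreme = [i for i, s in enumerate(test1) if s >= 90]
mean_test1 = sum(test1[i] for i in extreme) / len(extreme)
mean_test2 = sum(test2[i] for i in extreme) / len(extreme)

print(round(mean_test1, 1))  # well above 90
print(round(mean_test2, 1))  # noticeably lower -- regressed toward the mean
```

Nothing "happened" to these students between tests; the high first scores simply reflected good luck as well as skill, and the luck did not repeat.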

Failure to recognize regression is the source of many superstitions and of some ineffective practices as well. When day-to-day behavior has a large element of chance fluctuation, we may notice that others’ behavior improves (regresses toward average) after we criticize them for very bad performance, and that it worsens (regresses toward average) after we warmly praise them for an exceptionally fine performance. Ironically, then, regression toward the average can mislead us into feeling rewarded for having criticized others and into feeling punished for having praised them (Tversky & Kahneman, 1974).

The point to remember: When a fluctuating behavior returns to normal, there is no need to invent fancy explanations for why it does so. Regression toward the mean is probably at work.

“Once you become sensitized to it, you see regression everywhere.”

Psychologist Daniel Kahneman (1985)