APPENDIX A: Statistics Chapter Summary

What tabular methods do researchers use to organize and summarize data?

One such method is a tool known as a frequency distribution table, which lists all of the scores, as well as the frequency of those scores. Frequency distribution tables enable researchers to see which scores occur most often.
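A frequency distribution table can be sketched in a few lines of Python. The quiz scores below are illustrative data invented for this example, not data from the chapter.

```python
from collections import Counter

# Hypothetical quiz scores for 15 students (illustrative data)
scores = [7, 8, 8, 9, 7, 10, 8, 6, 9, 8, 7, 10, 8, 9, 8]

# A frequency distribution table: each score paired with its frequency
freq_table = Counter(scores)
for score in sorted(freq_table, reverse=True):
    print(f"{score}: {freq_table[score]}")

# The score that occurs most often is easy to read off the table
print("Most common:", freq_table.most_common(1)[0])  # (8, 6): the score 8 occurs 6 times
```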

What graphical methods do researchers use to organize and summarize data?

Researchers create histograms and frequency polygons to graphically depict the frequencies of scores. For both types of graphs, the scores are depicted on the horizontal or x-axis, and the frequency of each of these scores is depicted on the vertical or y-axis. A histogram is a frequency distribution graph in which bars are used to represent the frequency of scores. Frequency polygons provide the same information as a histogram, but with dots and lines rather than with bars.
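A text-mode stand-in for a histogram, using assumed illustrative scores, shows the same idea: score values along one axis, a bar for each frequency along the other.

```python
from collections import Counter

# Illustrative scores (assumed data for this sketch)
scores = [7, 8, 8, 9, 7, 10, 8, 6, 9, 8]
freq = Counter(scores)

# A sideways histogram: one '#' per occurrence of each score
for score in sorted(freq):
    print(f"{score:>2} | {'#' * freq[score]}")
```

A frequency polygon would plot the same (score, frequency) pairs as dots connected by lines instead of bars.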

What statistical measures do researchers use to find the most representative score of a distribution?

Researchers use the mean, median, and mode—three measures of central tendency—to describe the most representative score of a distribution. The mean is the average score; the median is the middlemost score; and the mode is the most frequently occurring score.
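Python's standard `statistics` module computes all three measures of central tendency directly; the scores below are illustrative.

```python
import statistics

# Illustrative distribution of nine scores
scores = [2, 3, 3, 4, 4, 4, 5, 5, 6]

print("mean:", statistics.mean(scores))      # 4  (sum of 36 divided by 9 scores)
print("median:", statistics.median(scores))  # 4  (the middlemost of the 9 sorted scores)
print("mode:", statistics.mode(scores))      # 4  (occurs three times, more than any other)
```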

When is it appropriate to use the mean, median, or mode as a measure of central tendency?

The appropriateness depends on the shape of the distribution. In a normal distribution, the mean, median, and mode equal each other and are equally good indicators of the most representative score. For distributions that are negatively or positively skewed, the median or mode is preferable to the mean, because the extreme scores in the tail of a skewed distribution pull the mean toward them.
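A small positively skewed example (assumed, income-like data invented for illustration) shows why the median can be preferable: a single extreme score drags the mean toward the tail while the median stays put.

```python
import statistics

# Positively skewed illustrative data: one extreme high score
scores = [20, 22, 23, 25, 26, 28, 200]

print("mean:", statistics.mean(scores))      # about 49.1 -- pulled up by the outlier
print("median:", statistics.median(scores))  # 25 -- still representative of most scores
```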

What statistical measures do researchers use to describe the variability of a distribution?

Researchers use the range and the standard deviation. The range is found by subtracting the lowest score from the highest score. The standard deviation is the average distance of the scores from the mean. It is found by subtracting the mean from each score, squaring those deviations, averaging the squared deviations, and taking the square root of that average. Because it takes every score into account, the standard deviation is usually a more accurate measure of the variability of a distribution than the range.
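The two measures of variability can be computed step by step on illustrative data:

```python
import math

# Illustrative scores (assumed data for this sketch)
scores = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(scores)
mean = sum(scores) / n                       # 40 / 8 = 5.0

# Range: highest score minus lowest score
rng = max(scores) - min(scores)              # 9 - 2 = 7

# Standard deviation: square each deviation from the mean, average
# the squared deviations, then take the square root of that average
variance = sum((x - mean) ** 2 for x in scores) / n   # 32 / 8 = 4.0
sd = math.sqrt(variance)                              # 2.0

print("range:", rng)
print("standard deviation:", sd)
```

(This is the population form of the standard deviation, dividing by n; inferential work often divides by n − 1 instead.)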

How do statistics enable researchers to compare scores from different distributions?

Researchers make scores comparable to each other by standardizing them. One statistic they use to standardize scores is the z-score, which is calculated by finding the distance of any raw score from the mean of its distribution and dividing it by the standard deviation of that distribution. Z-scores enable researchers to compare scores from different distributions according to how far above or below the mean of their respective distributions they are.
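The z-score calculation is a one-line formula. The exam means and standard deviations below are assumed values chosen for illustration.

```python
# z = (raw score - mean of its distribution) / standard deviation of that distribution
def z_score(raw, mean, sd):
    return (raw - mean) / sd

# Assumed illustrative distributions:
# Exam 1 has mean 70 and SD 10; Exam 2 has mean 50 and SD 5
z_exam1 = z_score(85, 70, 10)   # 1.5 -- 1.5 SDs above its distribution's mean
z_exam2 = z_score(60, 50, 5)    # 2.0 -- 2.0 SDs above its distribution's mean

# The lower raw score (60) is actually more impressive relative
# to its own distribution than the higher raw score (85)
print(z_exam1, z_exam2)
```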

How do researchers determine whether variables are correlated?

Researchers use a graphical tool called the scatterplot and a statistical tool known as the correlation coefficient. A scatterplot is a graph that depicts the relation between two variables by plotting the point at which people’s scores on two variables intersect. If the scores appear to gather along an imaginary diagonal line, the variables are probably correlated. A correlation coefficient is a statistic that more accurately indicates the strength and direction of the correlation between variables. The strength of the relationship between two variables is indicated by the magnitude of the correlation coefficient (how far away it is from 0). The direction of the relationship is indicated by whether the coefficient is positive or negative. When two variables are positively correlated, an increase in one of the variables corresponds to an increase in the other and the coefficient will be positive. When variables are negatively correlated, an increase in one of the variables corresponds to a decrease in the other and the coefficient will be negative.
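The correlation coefficient described above (the Pearson coefficient) can be computed from paired scores; the data below are illustrative, chosen so the coefficients come out at the extremes.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Sum of cross-products of deviations from each mean
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly positively correlated illustrative data: r is +1
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # approximately 1.0
# Perfectly negatively correlated data: r is -1
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # approximately -1.0
```

The sign gives the direction of the relationship; the distance from 0 gives its strength.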


What is the value of the experimental method for testing cause–effect relations?

The experimental method uses random assignment, which enables researchers to hold constant, or control for, any pre-existing differences between groups. It also enables the researcher to control other differences so that the only factor that differs between conditions is the manipulation of the independent variable. When differences are observed between conditions, researchers can therefore be confident that they were due to the manipulation.
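A minimal sketch of random assignment, using hypothetical participant IDs: shuffling before splitting means any pre-existing difference between participants is equally likely to land in either condition.

```python
import random

# 20 hypothetical participant IDs (illustrative)
participants = list(range(1, 21))
random.seed(0)               # fixed seed so the sketch is reproducible
random.shuffle(participants) # each participant equally likely to go to either group

experimental = participants[:10]   # receives the manipulation
control = participants[10:]        # does not

print("experimental:", sorted(experimental))
print("control:", sorted(control))
```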

How do researchers determine whether the differences they observe in an experiment are due to the effect of their manipulation or due to chance?

Researchers use inferential statistics to calculate the probability that the differences they observed were due to chance. Researchers begin with the null hypothesis—a hypothesis that states that any relationship observed between variables is due to chance—though they are actually interested in the alternative hypothesis, which states that there is a relationship between the observed variables. In explaining relationships between variables, researchers hope to rule out sampling error—chance differences between samples and populations. Psychologists use the distribution of (all possible) sample means to help them draw inferences about the probability of their particular data if, in fact, the null hypothesis is true. In doing so, they are engaging in null hypothesis significance testing—a statistical procedure that reports the probability that a research result could have occurred by chance alone. If the data are extremely unlikely under the null hypothesis, psychologists conclude that their findings are statistically significant.
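The logic can be sketched with a simple permutation test, one of several ways to carry out significance testing. The treatment and control scores are assumed illustrative data. Under the null hypothesis the group labels are interchangeable, so we repeatedly shuffle the labels and ask how often chance alone produces a difference as large as the one observed.

```python
import random

# Assumed illustrative scores for two conditions
treatment = [5.1, 5.8, 6.2, 6.9, 7.4]
control = [3.9, 4.2, 4.8, 5.0, 5.5]

# Observed difference between condition means (1.6)
observed = sum(treatment) / 5 - sum(control) / 5

random.seed(1)
pooled = treatment + control
count = 0
trials = 10_000
for _ in range(trials):
    # Reassign scores to groups at random, as the null hypothesis allows
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if diff >= observed:
        count += 1

# Probability of a difference this large arising by chance alone
p_value = count / trials
print("p =", p_value)   # well under .05, so the result is statistically significant
```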

How do researchers determine just how meaningful their results are?

Statistical significance tells you that the observed differences were probably not due to chance, but it doesn't indicate how big the effect was, so researchers calculate effect size—a measure of the magnitude of an effect that takes the consistency of the effect into account and is independent of sample size. Though there are conventional criteria for deciding whether an effect size is small, medium, or large, it is ultimately up to the researcher to determine the meaningfulness of the findings.
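One common effect-size statistic (Cohen's d, one of several such measures) expresses the difference between two group means in standard-deviation units; the scores below are illustrative.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: the mean difference scaled by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    ss1 = sum((x - m1) ** 2 for x in group1)   # sum of squared deviations, group 1
    ss2 = sum((x - m2) ** 2 for x in group2)   # sum of squared deviations, group 2
    pooled_sd = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Illustrative data: means differ by 2, pooled SD is 1, so d = 2.0 --
# a large effect by the conventional criteria
print(cohens_d([6, 7, 8], [4, 5, 6]))  # 2.0
```

Because d is scaled by variability rather than by how many participants were tested, it stays the same as the sample grows, unlike a p-value.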