17.5 Review of Concepts
Nonparametric Statistics
Nonparametric hypothesis tests are used when a study’s design does not meet the assumptions of a parametric test. This often occurs when we have a nominal or ordinal dependent variable, or a small sample in which the data suggest a skewed population distribution. Given the choice, we should use a parametric test, because parametric tests tend to have more statistical power and because we can more often calculate confidence intervals and effect sizes for them.
Chi-Square Tests
We use the chi-square test for goodness of fit when we have only one variable and it is nominal. We use the chi-square test for independence when we have two nominal variables; typically, for the purposes of articulating hypotheses, one variable is thought of as the independent variable and the other as the dependent variable. With both chi-square tests, we analyze whether the data we observe match what we would expect according to the null hypothesis. Both tests use the same basic six steps of hypothesis testing that we learned previously.
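For readers who want to see these two tests carried out computationally, here is a minimal Python sketch using SciPy; the observed counts are invented purely for illustration and are not from the text.

```python
# A minimal sketch of both chi-square tests using SciPy.
# The counts below are hypothetical and exist only to show the mechanics.
import numpy as np
from scipy.stats import chisquare, chi2_contingency

# Chi-square test for goodness of fit: one nominal variable.
# Observed counts for four categories, compared with equal expected counts.
observed = np.array([30, 25, 20, 25])
expected = np.full(4, observed.sum() / 4)
chi2_gof, p_gof = chisquare(f_obs=observed, f_exp=expected)
print(f"Goodness of fit: chi2 = {chi2_gof:.2f}, p = {p_gof:.3f}")

# Chi-square test for independence: two nominal variables arranged in a
# contingency table (rows = levels of one variable, columns = levels of the other).
table = np.array([[40, 10],
                  [25, 25]])
chi2_ind, p_ind, dof, expected_freqs = chi2_contingency(table, correction=False)
print(f"Independence: chi2 = {chi2_ind:.2f}, df = {dof}, p = {p_ind:.3f}")
```

In both cases, the test statistic compares each observed frequency with the frequency expected under the null hypothesis, which is exactly the comparison described above.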
Beyond Hypothesis Testing
We usually calculate an effect size as well; the most commonly calculated effect size with chi-square is Cramér’s V, also called Cramér’s phi. We can also create a graph that depicts the conditional proportions of an outcome for each group. Alternatively, we can calculate relative risk (also called relative likelihood or relative chance) to more easily compare the rates of a given outcome in each of two groups. As with ANOVA, when we reject the null hypothesis with a chi-square hypothesis test, we do not know which cells have observed frequencies that are farther from their expected frequencies than would occur if the two variables were independent. We can determine this by calculating adjusted standardized residuals, the distance of each observed frequency from its corresponding expected frequency, expressed in standard errors.
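These follow-up calculations can be sketched for a hypothetical 2 × 2 contingency table as follows; the table is invented, and the formulas are the standard ones for Cramér’s V, relative risk, and adjusted standardized residuals.

```python
# A minimal sketch of the follow-up calculations for a hypothetical 2x2 table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[40, 10],    # group 1: outcome present, outcome absent
                  [25, 25]])   # group 2: outcome present, outcome absent
chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()

# Cramer's V: chi-square scaled by sample size and by the smaller of
# (rows - 1) and (columns - 1).
min_dim = min(table.shape) - 1
cramers_v = np.sqrt(chi2 / (n * min_dim))

# Relative risk: the rate of the outcome in group 1 divided by its rate in group 2.
rate_1 = table[0, 0] / table[0].sum()
rate_2 = table[1, 0] / table[1].sum()
relative_risk = rate_1 / rate_2

# Adjusted standardized residuals: (observed - expected) divided by the
# standard error of that difference; values far from 0 flag the cells that
# depart most from independence.
row_prop = table.sum(axis=1, keepdims=True) / n
col_prop = table.sum(axis=0, keepdims=True) / n
adj_residuals = (table - expected) / np.sqrt(expected * (1 - row_prop) * (1 - col_prop))

print(f"Cramer's V = {cramers_v:.2f}, relative risk = {relative_risk:.2f}")
print("Adjusted standardized residuals:\n", adj_residuals)
```

A relative risk of, say, 2.0 would mean the outcome occurs at twice the rate in group 1 as in group 2, and adjusted standardized residuals well beyond about ±2 point to the cells driving a significant chi-square result.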