CHI-SQUARE TEST: AND THE WINNER IS?


Authors:

Kelly M. Goedert, Seton Hall University

Susan A. Nolan, Seton Hall University

Kaylise D. Algrim, Seton Hall University

There are all kinds of awards in life. Some come from beating out the competition, and others from crossing over a finish line. But what makes an award an award? Is it the actual prize that comes with a win or merely the symbol that the award represents? One study looked at whether symbolic awards would encourage altruistic behavior even without any tangible benefit to the winner (Gallus, 2017).

John D. Buffington/DigitalVision/Getty Images

Researchers set up a natural field experiment on the Wikipedia platform to see whether giving out awards would make its new editors (all volunteers) more likely to keep its pages up to date with correct information, even without any personal benefit for doing so (Gallus, 2017). The Swiss national Wikipedia portal made up an honor, the “Edelweiss with Star,” for a random selection of new editors. They recognized the new editors for joining the site and for their contributions, without explicitly defining what those contributions were. Of course, the new editors did not know that their selection for the honor was completely random! The researchers examined whether new editors who received the “Edelweiss with Star” would be more likely to continue editing than those who did not. An additional twist: on Wikipedia, the volunteer editors are anonymous, so the “Edelweiss with Star” went to a username rather than to the actual person. That’s not an honor that’s easy to advertise offline or to put on a résumé.

The researchers’ aim was to find out how people behaved when they believed they had been given an award, without knowing that the award had been randomly distributed to the “winners.” The researchers examined what we’ll call “award status”: whether editors did or did not receive an award. They compared whether award recipients (what they called the “treatment group”) would be more likely to continue using the platform than those who did not receive an award (the control group). The researchers wrote: “Treatment and control groups therefore comprise 1,617 and 2,390 editors, respectively. … Thanks to the random assignment of the treatment, potential confounding variables are on average distributed equally between the treatment and control groups.”

Correct! Confounding variables are those that systematically vary with the independent variable (here, award status), preventing us from knowing whether it is the independent variable or the confounding variable that causes an effect. Random assignment helps us to manage this problem.

Actually, confounding variables are those that systematically vary with the independent variable (here, award status), preventing us from knowing whether it is the independent variable or the confounding variable that causes an effect. Random assignment helps us to manage this problem.

This graph shows the share of Wikipedia editors who were active on the site in the month after awards were issued. The researchers reported that “the difference observed in the bar chart is indeed statistically significant … (χ²(1) = 18.22 …).”

Image description:

The bar graph shows whether or not a Wikipedia editor was active on the site in the month after awards were issued. The horizontal x-axis is labeled “Condition” and shows two vertical bars, one for the condition in which an editor received an award and one for the condition in which the editor received no award. The vertical y-axis is labeled “Share of group active over the next month” and runs from 0.0 on the bottom to 0.5 at the top, in intervals of 0.1. The share of the group that was active after receiving an award was 0.42, or 42%, and the share of the group active after not receiving an award was 0.35, or 35%.

Correct! We also know that this finding is statistically significant because the p value is less than the typical alpha level of 0.05.

Actually, we also know that this finding is statistically significant because the p value is less than the typical alpha level of 0.05.
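As a check on your understanding, the reported statistic can be reconstructed approximately from the graph. The counts below are our approximations, derived from the rounded shares (42% and 35%) and the group sizes (1,617 and 2,390); they are not the authors' raw data, so the result lands near, but not exactly on, the reported 18.22.

```python
import math

# Counts reconstructed from the rounded shares in the bar graph (42% of the
# 1,617 award recipients active; 35% of the 2,390 control editors active).
# These are approximations, not the authors' raw data, so the statistic
# will land near, but not exactly on, the reported chi-square of 18.22.
observed = [[679, 938],    # award group: [active, not active]
            [837, 1553]]   # control group: [active, not active]

row_totals = [sum(row) for row in observed]        # [1617, 2390]
col_totals = [sum(col) for col in zip(*observed)]  # [1516, 2491]
n = sum(row_totals)                                # 4007

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = (row total * column total) / grand total.
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (observed[i][j] - expected) ** 2 / expected

# With df = 1, the chi-square tail probability reduces to a normal tail:
# P(chi-square > x) = erfc(sqrt(x / 2))
p = math.erfc(math.sqrt(chi2 / 2))

print(f"chi2(1) = {chi2:.2f}, p = {p:.6f}")
```

The p value that comes out is far below .001, matching the researchers' conclusion; the small gap between this statistic and 18.22 comes entirely from rounding the shares to whole percentages.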

Here is the same graph, depicting the share of Wikipedia editors who were active on the site in the month after awards were issued, and the same results that the researchers reported: “The difference observed in the bar chart is indeed statistically significant … (χ²(1) = 18.22 …).”

Image description:

The bar graph shows whether or not a Wikipedia editor was active on the site in the month after awards were issued. The horizontal x-axis is labeled “Condition” and shows two vertical bars, one for the condition in which an editor received an award and one for the condition in which the editor received no award. The vertical y-axis is labeled “Share of group active over the next month” and runs from 0.0 on the bottom to 0.5 at the top, in intervals of 0.1. The share of the group that was active after receiving an award was 0.42, or 42%, and the share of the group active after not receiving an award was 0.35, or 35%.

Correct! We can reject the null hypothesis and conclude that editors who received an award were more likely to be active in the month following the award than those who did not receive the award. (It turns out that this finding extends for up to a year, according to the additional analyses that the researchers performed!) Note that we do not need to calculate adjusted standardized residuals in this case because there are just two groups, so we already know where the difference lies.

Actually, we can reject the null hypothesis and conclude that editors who received an award were more likely to be active in the month following the award than those who did not receive the award. (It turns out that this finding extends for up to a year, according to the additional analyses that the researchers performed!) Note that we do not need to calculate adjusted standardized residuals in this case because there are just two groups, so we already know where the difference lies.

Correct! Statistics is always about probabilities, so we never know for sure. This is a small p value, but it cannot be zero. In such cases, researchers often say p < .001 rather than p = 0.000 to highlight this fact.

Actually, statistics is always about probabilities, so we never know for sure. This is a small p value, but it cannot be zero. In such cases, researchers often say p < .001 rather than p = 0.000 to highlight this fact.

Correct! We typically include the sample size just after the degrees of freedom, and the effect size—Cramér’s V—after the p value.

Actually, we typically include the sample size just after the degrees of freedom, and the effect size—Cramér’s V—after the p value.

Correct! We need Cramér’s V to know how large the effect is. This is a large sample (1,617 editors in the award group and 2,390 editors in the control group), which provides a lot of statistical power. Thus, even a small, practically unimportant effect might be statistically significant.

Actually, we need Cramér’s V to know how large the effect is. This is a large sample (1,617 editors in the award group and 2,390 editors in the control group), which provides a lot of statistical power. Thus, even a small, practically unimportant effect might be statistically significant.
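You can compute Cramér's V yourself from the reported values. The formula divides the chi-square statistic by the sample size times one less than the smaller dimension of the table; the chi-square value and group sizes below come from the study, but the calculation itself is our illustration.

```python
import math

# Cramér's V from the reported chi-square statistic and the sample size.
# V = sqrt(chi2 / (N * (k - 1))), where k = min(number of rows, columns);
# for this 2x2 table (award status x activity status), k - 1 = 1.
chi2 = 18.22       # chi-square value reported by Gallus (2017)
n = 1617 + 2390    # 4,007 editors in total
v = math.sqrt(chi2 / (n * (2 - 1)))

print(f"Cramér's V = {v:.3f}")  # about 0.07: a very small effect
```

By conventional benchmarks for a 2×2 table, a V of 0.1 counts as a small effect, so a value near 0.07 confirms that this is a tiny (if real) difference that reached statistical significance largely because of the very large sample.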

The researchers found that award winners were more likely to be active than non-award winners, not just during the month after the awards were given, but for up to a year! (And these were made-up awards with no actual prizes!) But the researchers didn’t stop with just those analyses. They reported: “Excluding the award project’s page … does not change the results. … This suggests that award recipients who post [additional information] on the project’s page also make other contributions. Additionally, excluding editors’ own pages somewhat reduces the effect, but still shows a difference of five percentage points, which is highly statistically significant” (p. 4009). So, the additional Wikipedia activity wasn’t just editors adding to their original entry or writing more about themselves—the editors evidently took on something new.

Correct! The additional analysis is a form of severe testing. The researchers wondered if the effect occurred because the award led people to simply add to the original post that won them the award, or to add to their own Wikipedia entries. So, they ruled that possibility out, making their results stronger.

Actually, the additional analysis is a form of severe testing. The researchers wondered if the effect occurred because the award led people to simply add to the original post that won them the award, or to add to their own Wikipedia entries. So, they ruled that possibility out, making their results stronger.

The bottom line: Even symbolic awards can make a difference in behavior. So, when considering how to motivate people, remember that a job well done might be its own reward, but a little recognition doesn’t hurt.

Caiaimage/Sam Edwards/Getty Images

REFERENCES

Gallus, J. (2017). Fostering public good contributions with symbolic awards: A large-scale natural field experiment at Wikipedia. *Management Science, 63*(12), 3999–4015. https://doi.org/10.1287/mnsc.2016.2540