Abstract
The strategy method is often used in public goods games to measure an individual’s willingness to cooperate depending on the level of cooperation by their groupmates (conditional cooperation). However, while the strategy method is informative, it risks conflating confusion with a desire for fair outcomes, and its presentation may risk inducing elevated levels of conditional cooperation. This problem was highlighted by two previous studies, which found that the strategy method detected equivalent levels of cooperation even among participants grouped with computerized groupmates, indicative of confusion or irrational responses. However, those studies used small samples (n = 40 or n = 72) and had participants complete the strategy method only once, with computerized groupmates, precluding within-participant comparisons. Here, in contrast, 845 participants completed the strategy method twice, once with human and once with computerized groupmates. Our research aims were twofold: (1) to test the robustness of previous results with a large sample under various presentation conditions; and (2) to use a within-participant design to categorize participants according to how they behaved across the two scenarios. Ideally, a clean and reliable measure of conditional cooperation would find participants conditionally cooperating with humans but not with computers. Worryingly, only 7% of participants met this criterion. Overall, 83% of participants cooperated with the computers, and mean contributions towards computers were 89% as large as those towards humans. These results, robust to various presentation and order effects, raise serious concerns about the measurement of social preferences and question the idea that human cooperation is motivated by a concern for equal outcomes.
Subject
Applied Mathematics, Statistics, Probability and Uncertainty, Statistics and Probability
Cited by
2 articles.