Authors: Christopher Berry, Scot Burton
Abstract
The use of crowdsourced data has become extremely popular in marketing and public policy research. However, there are concerns about the validity of studies that source data from crowdsourcing platforms such as Amazon Mechanical Turk (MTurk). Using five different online sample sources, including multiple MTurk samples and professionally managed panels, the authors address issues related to online data quality and its effects on results for a policy-based 2 × 2 between-subjects experiment. They show that both survey response satisficing and multitasking are related to attention check performance beyond demographic differences, and that these measures differ substantially across the five online data sources. The authors identify segments of high and low response satisficers using a multi-item measure and show critical differences in the policy-relevant results of the experiment for these respondent segments. The findings have implications for concerns about failures to replicate results in the policy and consumer well-being, business, and social science literatures. The authors offer suggestions for reducing the problematic effects of response satisficing and poor data quality, which are shown to differ substantially across the sample sources examined.
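To make the segmentation approach concrete, the following is a minimal sketch, not the authors' code, of how one might average a multi-item satisficing measure, median-split respondents into high- and low-satisficing segments, and fit a 2 × 2 between-subjects model within each segment. The synthetic data, all column names, and the median-split rule are illustrative assumptions; the paper does not specify these details.

```python
# Hypothetical illustration: segment respondents by a multi-item response
# satisficing measure, then check whether the 2 x 2 between-subjects
# results differ across the high/low satisficing segments.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 400

# Synthetic respondent-level data standing in for an online sample.
df = pd.DataFrame({
    "factor_a": rng.integers(0, 2, n),  # first manipulated factor (0/1)
    "factor_b": rng.integers(0, 2, n),  # second manipulated factor (0/1)
    # Three hypothetical satisficing items on a 7-point scale.
    "sat1": rng.integers(1, 8, n),
    "sat2": rng.integers(1, 8, n),
    "sat3": rng.integers(1, 8, n),
})
# Outcome with an interaction effect plus noise.
df["dv"] = df["factor_a"] * df["factor_b"] + rng.normal(0, 1, n)

# Average the items into a scale score, then median-split into segments.
df["satisficing"] = df[["sat1", "sat2", "sat3"]].mean(axis=1)
df["segment"] = np.where(
    df["satisficing"] > df["satisficing"].median(), "high", "low"
)

# Fit the 2 x 2 ANOVA separately within each segment and compare results.
for seg, sub in df.groupby("segment"):
    model = smf.ols("dv ~ C(factor_a) * C(factor_b)", data=sub).fit()
    print(f"--- {seg}-satisficing segment ---")
    print(anova_lm(model, typ=2))
```

Under the paper's argument, effects detected in the low-satisficing segment could be attenuated or distorted in the high-satisficing segment; a comparison like this sketch is one way to surface such differences.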