Did a bot eat your homework? An assessment of the potential impact of bad actors in online administration of preference surveys

Authors:

Gonzalez, Juan Marcos; Grover, Kiran; Leblanc, Thomas W.; Reeve, Bryce B.

Abstract

Background: Online administration of surveys has a number of advantages, but it can also increase exposure to bad actors (humans and automated "bots") who may try to influence study results or to benefit financially from the survey. We analyze data collected through an online discrete-choice experiment (DCE) survey to evaluate the likelihood that bad actors affected the quality of the data collected.

Methods: We developed and fielded a survey instrument that included two sets of DCE questions asking respondents to select their preferred treatments for multiple myeloma. The survey also included questions to assess respondents' attention while completing the survey and their understanding of the DCE questions. We used a latent-class model to identify a class associated with perverse preferences or high model variance, and to measure the degree to which the quality checks included in the survey were correlated with class membership. Class-membership probabilities for the problematic class were used as weights in a random-parameters logit model to recover population-level estimates that minimize exposure to potential bad actors.

Results: A significant proportion of respondents provided answers with a degree of variability consistent with responses from bad actors. We also found that selecting a wide range of conditions in the survey screener is more consistent with the choice patterns expected from bad actors attempting to qualify for the study. The relationship between the number of incorrect answers to comprehension questions and problematic choice patterns peaked at around 5 of 10 questions.

Conclusions: Our results highlight the need for a robust discussion of how to handle bad actors in online preference surveys. While excluding survey respondents should be avoided under most circumstances, the impact of "bots" on preference estimates can be significant.
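The weighting step described in the Methods can be illustrated with a minimal sketch: each respondent's contribution to the choice-model likelihood is down-weighted by their estimated probability of belonging to the problematic class. This is a simplified illustration only; the study used a random-parameters logit, while the sketch below fits a plain weighted conditional logit, and all data, class-membership probabilities, and parameter values here are simulated, not the study's.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated example: n respondents each choose between 2 alternatives
# described by k attributes (hypothetical data, not the study's).
n, k = 200, 2
X = rng.normal(size=(n, 2, k))            # attribute levels for alternatives 0 and 1
beta_true = np.array([1.0, -0.5])         # "true" preference weights for the simulation
util = X @ beta_true                      # deterministic utilities, shape (n, 2)
p1 = 1.0 / (1.0 + np.exp(-(util[:, 1] - util[:, 0])))
choice = (rng.random(n) < p1).astype(int) # observed choices under a logit model

# Hypothetical latent-class output: probability each respondent belongs
# to the problematic ("bad actor") class. Weights down-weight likely bad actors.
p_bad = rng.uniform(0.0, 0.5, size=n)
w = 1.0 - p_bad

def neg_weighted_loglik(beta):
    """Negative weighted log-likelihood of a conditional logit."""
    v = X @ beta                              # utilities, shape (n, 2)
    v = v - v.max(axis=1, keepdims=True)      # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    ll_i = logp[np.arange(n), choice]         # log-prob of each observed choice
    return -(w * ll_i).sum()

res = minimize(neg_weighted_loglik, np.zeros(k), method="BFGS")
print(res.x)  # weighted preference estimates
```

With enough simulated respondents the weighted estimates recover values near `beta_true`; in the study's setting, the weights instead shift the estimates away from choice patterns attributed to the problematic class.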

Funder

Amgen

Publisher

Public Library of Science (PLoS)

Subject

Multidisciplinary


Cited by 3 articles.
