Affiliation:
1. The University of Queensland, Australia
Abstract
Misinformation spreads rapidly online. The common approach to dealing with it is to deploy expert fact-checkers who follow forensic processes to determine the veracity of statements. Unfortunately, such an approach does not scale well. Crowdsourcing has therefore been explored as a way to complement the work done by trained journalists. In this article, we look at the effect of presenting the crowd with evidence from others while they judge the veracity of statements. We implement variants of the judgment task design to understand whether and how the presented evidence affects the way crowd workers judge truthfulness and how well they perform. Our results show that, in certain cases, the presented evidence and the way in which it is presented may mislead crowd workers who would otherwise be more accurate if judging independently of others. Those who make appropriate use of the provided evidence, however, can benefit from it and produce better judgments.
Funder
ARC Discovery Project
ARC Training Centre for Information Resilience
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Science Applications, General Business, Management and Accounting, Information Systems