Resolving content moderation dilemmas between free speech and harmful misinformation

Authors:

Anastasia Kozyreva (1), Stefan M. Herzog (1), Stephan Lewandowsky (2,3), Ralph Hertwig (1), Philipp Lorenz-Spreen (1), Mark Leiser (4), Jason Reifler (5)

Affiliations:

1. Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin 14195, Germany

2. School of Psychological Science, University of Bristol, Bristol BS8 1QU, United Kingdom

3. School of Psychological Sciences, University of Western Australia, Perth 6009, Australia

4. Amsterdam Law and Technology Institute, VU-Amsterdam, Amsterdam 1081 HV, The Netherlands

5. Department of Politics, University of Exeter, Exeter EX4 4PY, United Kingdom

Abstract

In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.

Funder

Volkswagen Foundation

Publisher

Proceedings of the National Academy of Sciences

Subject

Multidisciplinary


Cited by 18 articles.
