Affiliations:
1. Affective Brain Lab, Department of Experimental Psychology, University College London
2. The Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London
3. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
Abstract
The powerful allure of social media platforms has been attributed to the human need for social rewards. Here, we demonstrate that the spread of misinformation on such platforms is facilitated by existing social ‘carrots’ (e.g., ‘likes’) and ‘sticks’ (e.g., ‘dislikes’) that are dissociated from the veracity of the information shared. Testing 951 participants over six experiments, we show that a slight change to the incentive structure of social media platforms, such that social rewards and punishments are contingent on information veracity, produces a considerable increase in the discernment of shared information, that is, in the proportion of true information shared relative to the proportion of false information shared. Computational modeling (i.e., drift-diffusion models) revealed that the underlying mechanism of this effect is associated with an increase in the weight participants assign to evidence consistent with discerning behavior. The results offer evidence for an intervention that could be adopted to reduce misinformation spread, which in turn could reduce violence, vaccine hesitancy, and political polarization, without reducing engagement.
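To give an intuition for the drift-diffusion framework mentioned above, the sketch below simulates a generic two-boundary drift-diffusion process in Python. It is purely illustrative and is not the authors' fitted model; the parameter values, the function name simulate_ddm, and the mapping of boundaries to 'veracity-consistent' versus 'veracity-inconsistent' choices are assumptions made for this example. A larger drift rate plays the role of a greater weight on evidence consistent with discerning behavior and yields more veracity-consistent choices.

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial; return (choice, reaction_time).

    choice is +1 if the upper boundary (here, the veracity-consistent
    option) is reached first, -1 for the lower boundary, 0 on timeout.
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_t:
        # Accumulate noisy evidence toward one of the two boundaries.
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return +1, t
        if x <= -threshold:
            return -1, t
    return 0, t

# A higher drift rate mimics assigning more weight to evidence
# consistent with discerning (veracity-contingent) behavior.
rng = np.random.default_rng(0)
for drift in (0.2, 1.0):
    choices = [simulate_ddm(drift, rng=rng)[0] for _ in range(2000)]
    rate = np.mean([c == 1 for c in choices])
    print(f"drift={drift}: P(veracity-consistent choice) = {rate:.2f}")
```

Under these assumed parameters, the simulation simply shows that increasing the drift rate raises the proportion of choices terminating at the veracity-consistent boundary, which is the qualitative pattern the abstract attributes to the incentive manipulation.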
Publisher
eLife Sciences Publications, Ltd
Subject
General Immunology and Microbiology, General Biochemistry, Genetics and Molecular Biology, General Medicine, General Neuroscience
Cited by
8 articles.