Affiliation:
1. Universidade Federal Fluminense (UFF)
2. Nuclear Engineering Institute
3. Universidade Federal do Rio de Janeiro (UFRJ)
Abstract
This study aims to explain how social media (SM) users, whilst searching for information, can become trapped in a quagmire of misinformation, even when they have no denialist inclinations or sympathy for hate groups. We analyze the interactions between cognitive biases and the deep preference learning (DPL) algorithms that SM companies use to curate the content conveyed to their users. The study proposes a model of user behavior and explains how the SM business model allows new information to be introduced into the quagmire in order to shift users' opinions in a direction desired by a customer willing to pay for it, and, eventually, to accomplish that shift. The model explains why some popular tactics against misinformation, such as censorship and fact-checking, achieve very poor results. We suggest that policies promoting face-to-face encounters in friendly environments can be more effective in this struggle. We believe the model can help decision makers develop more efficient anti-disinformation policies.
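To make the mechanism summarized above concrete, the following is a minimal, hypothetical sketch (not the paper's model) of the feedback loop between a preference-learning recommender and a confirmation-biased user: the recommender updates its estimate of the user's stance from engagement signals and serves increasingly agreeable content, which in turn drags the user's opinion along. All names and parameters (bias_strength, learning_rate, the Gaussian exposure noise) are illustrative assumptions.

```python
import random

# Toy simulation of the bias/recommender feedback loop described in the
# abstract. Parameters and names are illustrative, not taken from the paper.

def simulate_filter_bubble(steps=200, bias_strength=3.0, learning_rate=0.1, seed=0):
    rng = random.Random(seed)
    user_opinion = 0.1      # user's latent stance on a topic, in [-1, 1]
    estimated_pref = 0.0    # recommender's running estimate of that stance

    for _ in range(steps):
        # Recommender curates an item close to its current estimate of the user.
        item_slant = max(-1.0, min(1.0, estimated_pref + rng.gauss(0.0, 0.3)))

        # Confirmation bias: engagement probability grows as the item agrees
        # with the user's current opinion.
        agreement = 1.0 - abs(item_slant - user_opinion) / 2.0
        engaged = rng.random() < agreement ** bias_strength

        if engaged:
            # Preference learning: move the estimate toward the engaged item.
            estimated_pref += learning_rate * (item_slant - estimated_pref)
            # Exposure effect: the user's opinion drifts toward consumed content.
            user_opinion += 0.5 * learning_rate * (item_slant - user_opinion)

    return user_opinion, estimated_pref


if __name__ == "__main__":
    opinion, estimate = simulate_filter_bubble()
    print(f"final user opinion: {opinion:+.2f}, recommender estimate: {estimate:+.2f}")
```

Under these assumptions, engagement and curation reinforce each other, so the recommender's estimate and the user's opinion converge on an ever-narrower band of content, which is the quagmire dynamic the study formalizes.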
Publisher
Research Square Platform LLC