Preventing the Diffusion of Disinformation on Disaster SNS by Collective Debunking with Penalties
Published: 2024-06-20
Volume: 36
Issue: 3
Pages: 555-567
ISSN: 1883-8049
Container-title: Journal of Robotics and Mechatronics
Short-container-title: JRM
Language: en
Author:
Kubo Masao (1), Sato Hiroshi (1, ORCID), Iwanaga Saori (2, ORCID), Yamaguchi Akihiro (3)
Affiliation:
1. Department of Computer Science, National Defense Academy of Japan, 1-10-20 Hashirimizu, Yokosuka, Kanagawa 239-8686, Japan
2. Japan Coast Guard Academy, 5-1 Wakaba-cho, Kure, Hiroshima 737-8512, Japan
3. Department of Information and Systems Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-Higashi, Higashi-ku, Fukuoka 811-0295, Japan
Abstract
As online resources such as social media are increasingly used in disaster situations, confusion caused by the spread of false information, misinformation, and hoaxes has become a serious issue. Although a large body of research has examined how to suppress disinformation, i.e., the widespread dissemination of such false information, most studies that take a payoff perspective have been based on prisoner's-dilemma experiments, and measures against the actual occurrence of disinformation on disaster SNSs have not been analyzed. In this paper, we focus on a characteristic of disaster SNS information: it allows citizens to jointly confirm whether a disaster is real. We refer to this activity as collective debunking, propose a profit-agent model of it, and analyze the model using an evolutionary game. As a result, we experimentally found that deception in the confirmation of disaster information uploaded to SNS is likely to lead to the occurrence of disinformation. We also found that if this deception can be detected and punished, for example by patrols, the occurrence of disinformation tends to be suppressed.
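The abstract's core claim — that an expected penalty for detected deception can flip the evolutionary outcome from disinformation to honest confirmation — can be illustrated with a minimal replicator-dynamics sketch. This is not the authors' profit-agent model; the payoff values, the deception bonus `b`, the patrol detection probability `p`, and the penalty `f` below are all hypothetical parameters chosen for illustration.

```python
# Minimal replicator-dynamics sketch (illustrative; not the paper's model).
# Two strategies among SNS users confirming disaster reports:
#   H = honest debunking, D = deceptive confirmation.
# Assumed payoffs: deception earns an extra benefit b, but patrols detect
# it with probability p and apply penalty f (expected penalty p * f).

def evolve(x0, b, p, f, steps=2000, dt=0.01):
    """Return the final share x of honest agents under replicator dynamics.

    Assumed payoffs: honest agents earn 1; deceptive agents earn
    1 + b - p * f (deception bonus minus expected penalty).
    """
    x = x0
    for _ in range(steps):
        pi_h = 1.0                  # honest payoff (assumed)
        pi_d = 1.0 + b - p * f      # deceptive payoff minus expected penalty
        avg = x * pi_h + (1 - x) * pi_d
        x += dt * x * (pi_h - avg)  # replicator equation (Euler step)
        x = min(max(x, 0.0), 1.0)   # keep the share in [0, 1]
    return x

# No patrols: deception pays, so honesty dies out.
weak = evolve(x0=0.5, b=0.5, p=0.0, f=2.0)
# Strong patrols: expected penalty (1.0) exceeds the bonus (0.5),
# so honest confirmation takes over the population.
strong = evolve(x0=0.5, b=0.5, p=0.5, f=2.0)
```

Under these assumed parameters the sign of `b - p * f` alone decides the outcome, which mirrors the abstract's finding that detection plus punishment tends to suppress disinformation.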
Funder
Japan Society for the Promotion of Science
Publisher
Fuji Technology Press Ltd.
Cited by: 1 article.