Abstract
Recent accounts from researchers, journalists, and federal investigators have reached the same conclusion: social media are systematically exploited to manipulate and alter public opinion. Some disinformation campaigns have been coordinated by means of bots, social media accounts controlled by computer scripts that try to disguise themselves as legitimate human users. In this study, we describe one such operation that occurred in the run-up to the 2017 French presidential election. We collected a massive Twitter dataset of nearly 17 million posts published between 27 April and 7 May 2017 (Election Day), and set out to study the MacronLeaks disinformation campaign. By leveraging a mix of machine learning and cognitive behavioral modeling techniques, we separated humans from bots and then studied the activities of the two groups independently, as well as their interplay. We characterize both the bots and the users who engaged with them, and contrast them with the users who did not. The prior interests of disinformation adopters point to the reasons for the campaign's limited success: the users who engaged with MacronLeaks were mostly foreigners with a pre-existing interest in alt-right topics and alternative news media, rather than French users with diverse political views. Finally, anomalous account usage patterns suggest the possible existence of a black market for reusable political disinformation bots.
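The abstract mentions separating humans from bots with machine learning and cognitive behavioral modeling, but does not detail the pipeline here. The sketch below is only a generic illustration of feature-based bot classification under assumed inputs: the account-level feature names, toy data, and the choice of a random forest are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only: generic feature-based bot classification.
# Features, toy data, and model choice are assumptions, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical account-level features commonly used in bot detection:
# [posts_per_day, followers_to_friends_ratio, account_age_days, retweet_fraction]
X = np.array([
    [850.0, 0.02,   12, 0.95],   # bot-like: very high volume, very young account
    [  3.5, 1.10, 2400, 0.30],   # human-like: low volume, old account
    [420.0, 0.05,   30, 0.88],   # bot-like
    [  1.2, 0.90, 1800, 0.10],   # human-like
])
y = np.array([1, 0, 1, 0])  # toy labels: 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=2)  # 2-fold CV on the toy data
print("toy cross-validation accuracy:", scores.mean())
```

In practice, such a classifier would be trained on many thousands of labeled accounts and a much richer feature set (posting dynamics, content, and network signals) rather than the four toy features shown here.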
Publisher
University of Illinois Libraries
Subject
Computer Networks and Communications, Human-Computer Interaction
Cited by
150 articles.