Adversarial Dynamics in Centralized Versus Decentralized Intelligent Systems

Author:

Levin Brinkmann¹, Manuel Cebrian²,³, Niccolò Pescetelli⁴,⁵

Affiliation:

1. Center for Humans and Machines, Max Planck Institute for Human Development

2. Department of Statistics, Universidad Carlos III de Madrid

3. UC3M‐Santander Big Data Institute, Universidad Carlos III de Madrid

4. Department of Humanities and Social Sciences, New Jersey Institute of Technology

5. PSi, People Supported Technologies Ltd

Abstract

Artificial intelligence (AI) is often used to predict human behavior, thus potentially posing limitations to individuals' and collectives' freedom to act. AI's most controversial and contested applications range from targeted advertisements to crime prevention, including the suppression of civil disorder. Scholars and civil society watchdogs are discussing the oppressive dangers of AI being used by centralized institutions, like governments or private corporations. Some suggest that AI gives asymmetrical power to governments, compared to their citizens. On the other hand, civil protests often rely on distributed networks of activists without centralized leadership or planning. Civil protests create an adversarial tension between centralized and decentralized intelligence, opening the question of how distributed human networks can collectively adapt and outperform a hostile centralized AI trying to anticipate and control their activities. This paper leverages multi‐agent reinforcement learning to simulate dynamics within a human–machine hybrid society. We ask how decentralized intelligent agents can collectively adapt when competing with a centralized predictive algorithm, wherein prediction involves suppressing coordination. In particular, we investigate an adversarial game between a collective of individual learners and a central predictive algorithm, each trained through deep Q‐learning. We compare different predictive architectures and showcase conditions in which the adversarial nature of this dynamic pushes each intelligence to increase its behavioral complexity to outperform its counterpart. We further show that a shared predictive algorithm drives decentralized agents to align their behavior. This work sheds light on the totalitarian danger posed by AI and provides evidence that decentrally organized humans can overcome its risks by developing increasingly complex coordination strategies.
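To make the adversarial setup described in the abstract concrete, the sketch below is a deliberately simplified toy reading of it, not the authors' implementation: a handful of decentralized Q-learners tries to coordinate on one of several locations, while a single central Q-learner tries to predict, and thereby "suppress," that coordination. Tabular Q-learning stands in for the paper's deep Q-networks, and the environment, the reward scheme, and names such as n_agents and k_locations are illustrative assumptions.

```python
# Toy sketch (assumed environment): decentralized coordinators vs. a central predictor,
# both trained with tabular Q-learning as a stand-in for deep Q-learning.
import numpy as np

rng = np.random.default_rng(0)
n_agents, k_locations, episodes = 5, 4, 20_000
alpha, gamma, eps = 0.1, 0.9, 0.1

# Each learner conditions only on the previous round's majority location (the "state").
agent_q = np.zeros((n_agents, k_locations, k_locations))   # [agent, state, action]
central_q = np.zeros((k_locations, k_locations))            # [state, predicted location]

state = 0
for _ in range(episodes):
    # Decentralized agents pick locations epsilon-greedily.
    acts = np.array([
        rng.integers(k_locations) if rng.random() < eps else int(np.argmax(agent_q[i, state]))
        for i in range(n_agents)
    ])
    # The central predictor guesses where the majority will gather.
    pred = rng.integers(k_locations) if rng.random() < eps else int(np.argmax(central_q[state]))

    counts = np.bincount(acts, minlength=k_locations)
    majority = int(np.argmax(counts))
    coordinated = counts[majority] > n_agents // 2

    # Zero-sum-style rewards: agents win by coordinating where the predictor is absent;
    # the predictor wins by anticipating the coordinated location.
    agent_r = 1.0 if coordinated and majority != pred else 0.0
    central_r = 1.0 if coordinated and majority == pred else 0.0

    next_state = majority
    for i in range(n_agents):
        td = agent_r + gamma * agent_q[i, next_state].max() - agent_q[i, state, acts[i]]
        agent_q[i, state, acts[i]] += alpha * td
    td_c = central_r + gamma * central_q[next_state].max() - central_q[state, pred]
    central_q[state, pred] += alpha * td_c
    state = next_state

print("sample agent policy per state:", agent_q[0].argmax(axis=1))
print("central prediction per state:", central_q.argmax(axis=1))
```

Because each side's reward depends on the other's current policy, neither value function can settle permanently: once the predictor locks onto a pattern, the agents are pushed toward a more complex or less predictable coordination strategy, which is the qualitative dynamic the paper studies at scale with deep Q-networks.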

Funder

Max-Planck-Gesellschaft

Publisher

Wiley

Subject

Artificial Intelligence, Cognitive Neuroscience, Human-Computer Interaction, Linguistics and Language, Experimental and Cognitive Psychology

