Affiliation:
1. Dept. of Informatics, King’s College London, London, United Kingdom
Abstract
Self-governing hybrid societies are multi-agent systems in which humans and machines interact by adapting to each other's behaviour. Advances in Artificial Intelligence (AI) have brought an increasing hybridisation of our societies, in which one particular type of behaviour has become more and more prevalent, namely deception. Deceptive behaviour, such as the propagation of disinformation, can undermine a society's ability to govern itself. However, self-governing societies have the ability to respond to various phenomena. In this article, we explore how they respond to the phenomenon of deception from an evolutionary perspective, considering that agents have limited adaptation skills. Will hybrid societies fail to govern deceptive behaviour and reach a Tragedy of the Digital Commons? Or will they manage to avoid it through cooperation? How resilient are they against large-scale deceptive attacks? We provide a tentative answer to some of these questions through the lens of evolutionary agent-based modelling, drawing on the scientific literature on deceptive AI and public goods games.
Funder
Royal Academy of Engineering and the Office of the Chief Science Adviser for National Security
Publisher
Association for Computing Machinery (ACM)