Affiliation:
1. University of Oxford, UK
Abstract
Some take seriously the possibility of artificial intelligence (AI) takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? And what empirical claims must hold for the former to lead to the latter? In this paper, I address these questions, providing foundations for further evaluation of the likelihood of takeover.
Publisher
Oxford University Press (OUP)