Abstract
Artificial intelligence has been a hot topic in recent years, particularly as it relates to warfare and military operations. While rational choice approaches have been widely used to understand the causes of war, there is little literature that systematically applies the rational choice methodology to the role of AI in warfare. This paper aims to fill that gap by exploring how rational choice models can inform our understanding of the power and limitations of AI in warfare. The theoretical approach suggests (a) that a reduction in the price of AI increases the demand for moral judgment, and (b) that without a human in the AI decision-making loop, peace is impossible; the very nature of AI rules out peace through mutually assured destruction.
Publisher
Springer Science and Business Media LLC