Abstract
Authoritarian regimes’ unrestricted collection of citizens’ data might constitute an advantage in the development of some types of AI, and AI might in turn facilitate authoritarian practices. This feedback loop challenges democracies. In a critical continuation of the Pentagon’s Third Offset Strategy, I investigate a possible Democratic Offset regarding military applications of AI, focussed on contestation, deliberation, and participation. I apply Landemore’s Open Democracy, Hildebrandt’s Agonistic Machine Learning, and Sharp’s Civilian-Based Defence. Discussing value pluralism in AI ethics, I criticise parts of the literature for leaving the fundamental ethical incompatibility of democracies and authoritarian regimes unaddressed. I focus on the duty to disobey illegal orders derived from customary international humanitarian law (IHL) and on the standard of ‘meaningful human control’, which is central to the partially outdated debate about lethal autonomous weapon systems (LAWS). I criticise the standard of ‘meaningful human control’ along two pathways: first, the ethical and legal principles of just war theory and IHL should be implemented in military applications of AI to submit human commands to more control, in the sense of technological disaffordances. Second, the debate should focus on the societal circumstances under which personal responsibility and disobedience can be trained and exerted in deliberation and participation related to military applications of AI, in the sense of societal affordances. In a larger picture, this includes multi-level stakeholder involvement, robust documentation to facilitate auditing, civilian-based defence in decentralised smart cities, and open-source intelligence. This multi-layered approach fosters cognitive diversity, which might constitute a strategic advantage for democracies regarding AI.
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences
Cited by
1 article.
1. Thou Shall Not Kill. Advances in Human Services and Public Health (2024-03-06)