Abstract
Artificial intelligence (AI) has found extensive application, to varying degrees, across diverse domains, including the possibility of its use in military contexts for decisions that can have moral consequences. A recurring challenge in this area concerns the allocation of moral responsibility for negative AI-induced outcomes. Some scholars posit the existence of an insurmountable “responsibility gap”, wherein neither the AI system nor the human agents involved can or should be held responsible. Conversely, other scholars dispute the presence of such gaps or propose potential solutions. One solution that frequently emerges in the AI ethics literature is the concept of command responsibility, wherein human agents may be held responsible because they perform a supervisory role over the (subordinate) AI. In this article we examine command responsibility in light of recent empirical studies and psychological evidence, aiming to anchor the discussion in empirical realities rather than relying exclusively on normative arguments. Our argument can be succinctly summarized as follows: (1) while the theoretical foundation of command responsibility appears robust, (2) its practical implementation raises significant concerns; (3) yet these concerns alone should not entirely preclude its application, (4) rather, they underscore the importance of considering and integrating empirical evidence into ethical discussions.
Publisher
Springer Science and Business Media LLC