Abstract
In this paper we introduce a computational control framework that can keep AI-driven military autonomous devices operating within the boundaries set by applicable rules of International Humanitarian Law (IHL) related to targeting. We discuss the necessary legal tests and variables, and introduce the structure of a hypothetical IHL-compliant targeting system.
Funder
Nederlandse Organisatie voor Wetenschappelijk Onderzoek
Publisher
Springer Science and Business Media LLC
Subject
Library and Information Sciences, Computer Science Applications
Cited by
3 articles.
1. Explainable AI in Military Training Applications;Advances in Explainable AI Applications for Smart Cities;2024-01-18
2. Military robots should not look like a humans;Ethics and Information Technology;2023-08-17
3. THE LEGAL REVIEW OF WEAPONS AND THE AUTONOMOUS WEAPON SYSTEMS;Düşünce Dünyasında Türkiz;2023-04-25