Abstract
This paper addresses the ethical challenges raised by the use of lethal autonomous weapons systems. Drawing on aspects of the philosophy of Martin Heidegger, it demonstrates that such systems create ethical problems because of the lack of moral agency in an autonomous system and the inauthentic nature of the deaths such a system causes. The paper considers potential solutions to these issues before arguing that, from a Heideggerian standpoint, they cannot be overcome, and thus that the development and use of lethal autonomous weapons systems should be resisted and prohibited.
Publisher
National Documentation Centre (EKT)
References (13 articles)
1. Bonnefon, Jean-François. “Trusting Self-Driving Cars Is Going to Be a Big Step for People.” Interview by Jonathan O’Callaghan. Horizon Magazine, April 2, 2019, https://ec.europa.eu/research-and-innovation/en/horizon-magazine/trusting-self-driving-cars-going-be-big-step-people.
2. Brayford, Kieran M. “Autonomous Weapons Systems and the Necessity of Interpretation: What Heidegger Can Tell Us About Automated Warfare.” AI & Society (2022): 1-9. doi: https://doi.org/10.1007/s00146-022-01586-w.
3. Dreyfus, Hubert. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, 2009.
4. Faye, Emmanuel. Heidegger: The Introduction of Nazism into Philosophy in Light of the Unpublished Seminars of 1933-1935. New Haven, CT: Yale University Press, 2011.
5. Fisher, Roger. “Preventing Nuclear War.” Bulletin of the Atomic Scientists 37, no. 3 (1981): 11-17. doi: https://doi.org/10.1080/00963402.1981.11458828.