Abstract
Though war is never a good thing, there are times when, all things considered, it is arguably justified. Most obviously, providing direct military assistance to a victim of unjust aggression would constitute a rather clear case for military intervention. However, providing direct military assistance may in some cases be fraught with risks and dangers, rendering it politically (and possibly even morally) difficult for states to adequately justify such action. In this article I argue that autonomous weapon systems (AWS) present a way past this dilemma, providing a method for delivering direct military assistance, but doing so in a way that is less politically overt and hostile than sending one's own combat units to aid a beleaguered state. Thus, sending AWS presents an additional forceful measure short of war which states may employ, adding to the political options available for combating unjust aggression and allowing a state to provide direct assistance to victim states without necessarily bringing itself into the conflict. In making this argument I draw on the current Russian invasion of Ukraine as a running example.
Publisher
Institute for Ethics and Emerging Technologies
Cited by
3 articles.