Abstract
This paper sketches elements of a theory of the ethics of autonomised harming: the phenomenon of delegating decisions about whether and whom to harm to artificial intelligence (AI) in self-driving cars and autonomous weapon systems. First, the paper elucidates the challenge of integrating non-human, artificial agents, which lack rights and duties, into our moral framework, which relies on precisely these notions to determine the permissibility of harming. Second, the paper examines how potential differences between human agents and non-human, artificial agents might bear on the permissibility of delegating life-and-death decisions to AI systems. Third, and finally, the paper explores a series of resulting complexities. These include the challenge of weighing autonomous systems’ promise to reduce harm against the intrinsic value of rectificatory justice, as well as the peculiar possibility that delegating harmful acts to AI might render ordinarily impermissible acts permissible. By illuminating what happens when we extend normative theory beyond its traditional boundaries, this discussion offers a starting point for assessing the moral permissibility of delegating consequential decisions to non-human, artificial agents.
Funder
Edmond J. Safra Center for Ethics, Harvard University
Stanford University
Publisher
Springer Science and Business Media LLC