Abstract
This paper proposes a methodological redirection of the philosophical debate on artificial moral agency (AMA) in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices that normally assume the moral agency and responsibility of participants. The proposal is supported by an analysis of the AMA debate, which is found to be overly caught up in the opposition between so-called standard and functionalist conceptions of moral agency, conceptually confused, and practically inert. Additionally, we outline some main themes of research in need of attention in light of the suggested normative approach to AMA.
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Philosophy
References: 100 articles.
Cited by: 40 articles.