Abstract
The increasing prevalence of autonomously operating artificial agents has created the desire, and arguably the need, to equip such agents with moral capabilities. A potential tool for morally sanctioning an artificial agent as admissible for its tasks is to apply a so-called moral Turing test (MTT) to the machine. The MTT can be supported by a pragmatist metaethics as an iteratively applied and modified procedure. However, this iterative, experimentalist procedure faces a dilemma due to the problem of technological entrenchment. I argue that, at least in certain important domains of application, the justification of artificial moral agents requires their deployment, which may entrench them and thereby undermine the justificatory process by hindering its further iteration.
Funder
Friedrich-Alexander-Universität Erlangen-Nürnberg
Publisher
Springer Science and Business Media LLC