Abstract
In the research field of machine ethics, artificial moral agents are commonly categorized into four types, the most advanced of which is referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding, and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper discusses various aspects of full-blown AMAs and presents the following argument: the creation of full-blown artificial moral agents, endowed with intentional mental states and moral emotions and trained to align with human values, does not by itself guarantee that these systems will have human morality. It is therefore questionable whether they will be inclined to honor and follow what they perceive as incorrect moral values. We do not claim that there is such a thing as a universally shared human morality, only that, just as different human communities hold different sets of moral values, the moral systems or values of the artificial agents under discussion would differ from those held by human communities, for reasons we discuss in the paper.
Publisher
Springer Science and Business Media LLC