Affiliation:
1. Lomonosov Moscow State University
Abstract
The question of whether an artificial moral agent (AMA) is possible involves a whole range of problems raised by Kant within the framework of practical philosophy, problems that have not exhausted their heuristic potential to this day. First, I show the significance of the correlation between the moral law and freedom. Since a rational being believes that his or her will is independent of external influences, that will turns out to be governed by the moral law and is therefore autonomous. Morality and freedom are thus correlated through independence from the external. Accordingly, if the actions of artificial intelligence (AI) are determined by something or someone external to it (by a human), then it acts not morally and freely but heteronomously. One consequence of AI’s lack of autonomy, and thus of its lack of access to the moral law, is that it does not and cannot have a moral understanding that proceeds from the moral law. Another consequence is that it has no sense of duty, which would follow from the moral law. Moral action is therefore impossible for an AMA, because it lacks autonomy and the moral law, moral understanding and a sense of duty. I conclude, first, that an AMA not only cannot but also should not be moral, since the inclusion of any moral principle would require an individual to choose it, which makes the choice of the principle itself immoral. Second, although AI has no will as such, which prima facie rules out not only moral but also legal action, it can still act legally in the sense of conforming to legal law, since AI carries a quasi-human will. The creation of AI should therefore be based not on moral principles but on legal law that prioritises human freedom and rights.
Publisher
Immanuel Kant Baltic Federal University