Abstract
Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.
Publisher
Springer Science and Business Media LLC
Subject
Management of Technology and Innovation; Health Policy; Issues, Ethics and Legal Aspects; Health (Social Science)
Cited by
19 articles.