Abstract
Purpose
The widespread use of artificial intelligence (AI) raises a number of ethical issues, including concerns about fairness, surveillance, transparency, neutrality and human rights. The purpose of this manuscript is to explore the possibility of developing cognitive morality in AI systems.
Design/methodology/approach
This is exploratory research. The manuscript investigates the likelihood of cognitive moral development in AI systems, as well as potential pathways for such development. It also proposes a novel idea for the characterization and development of ethically conscious, artificially intelligent robotic machines.
Findings
The manuscript explores the possibility of categorizing AI machines according to the level of cognitive morality they embody, drawing on Lawrence Kohlberg's work on cognitive moral development in humans. It further suggests that, by providing appropriate inputs to AI machines in accordance with the proposed concept, humans may assist in developing an ideal AI creature that is morally more responsible and acts as a moral agent capable of meeting the demands of morality.
Research limitations/implications
This manuscript is limited in that it focuses exclusively on Kohlberg's perspective, and that theory is not flawless. Carol Gilligan, one of Kohlberg's former doctoral students, argued that Kohlberg's proposal was unfair and sexist because it did not take into account the views and experiences of women. Moreover, as Kohlberg argues, a person who follows the law may still engage in immoral behaviour, because laws and social norms are not perfect. This study paves the way for future research examining how the ideas of thinkers such as Joao Freire and Carl Rogers could be applied to AI systems.
Originality/value
This is original research that draws inspiration from the cognitive moral development theory of the American psychologist Lawrence Kohlberg. The authors present a fresh way of classifying AI systems, which should make it easier to endow robots with cognitive morality.