Affiliation:
1. Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430079, China
2. School of Marxism, Tsinghua University, Beijing 100084, China
Abstract
Artificial intelligence has rapidly integrated into human society, and its moral decision-making has begun to permeate our daily lives, making research on moral judgments of artificial intelligence behavior increasingly important. The present research examines how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where judgments are typically driven by controlled cognitive processes, and in a footbridge dilemma, where judgments are typically driven by automatic emotional responses. Across three experiments (n = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people’s moral judgments: participants rated AI agents’ behavior as more immoral and more deserving of blame than humans’ behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people’s moral judgments: participants rated action (a utilitarian act) as less moral and less permissible, and as more morally wrong and blameworthy, than inaction (a deontological act). A mixed-design experiment (Experiment 3) produced a pattern of results consistent with Experiments 1 and 2. These findings suggest that people apply different modes of moral judgment to artificial intelligence in different types of moral dilemmas, which may be explained by the different processing systems engaged when people make moral judgments in each type of dilemma.
Funder
National Social Science Foundation of China
National Natural Science Foundation of China
Subject
Behavioral Neuroscience; General Psychology; Genetics; Development; Ecology, Evolution, Behavior and Systematics
Cited by
4 articles.