Moral Judgments of Human vs. AI Agents in Moral Dilemmas

Author:

Zhang Yuyan 1, Wu Jiahua 1, Yu Feng 1, Xu Liying 2

Affiliation:

1. Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430079, China

2. School of Marxism, Tsinghua University, Beijing 100084, China

Abstract

Artificial intelligence has rapidly integrated into human society, and its moral decision-making has begun to permeate our lives; research on moral judgments of artificial intelligence behavior is therefore increasingly significant. The present research examines how people make moral judgments about the behavior of artificial intelligence agents in the trolley dilemma, where people are usually driven by controlled cognitive processes, and in the footbridge dilemma, where people are usually driven by automatic emotional responses. Across three experiments (n = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people's moral judgments: participants rated AI agents' behavior as more immoral and more deserving of blame than humans' behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people's moral judgments: participants rated action (a utilitarian act) as less moral and permissible, and as more morally wrong and blameworthy, than inaction (a deontological act). A mixed-design experiment (Experiment 3) yielded a pattern of results consistent with Experiments 1 and 2. These findings suggest that people adopt different modes of moral judgment toward artificial intelligence in different types of moral dilemmas, which may be because people engage different processing systems when making moral judgments in different types of moral dilemmas.

Funder

National Social Science Foundation of China

National Natural Science Foundation of China

Publisher

MDPI AG

Subject

Behavioral Neuroscience; General Psychology; Genetics; Development; Ecology, Evolution, Behavior and Systematics


Cited by 4 articles.

1. Psychological and Brain Responses to Artificial Intelligence’s Violation of Community Ethics;Cyberpsychology, Behavior, and Social Networking;2024-08-01

2. AI and Warfare: A Rational Choice Approach;Eastern Economic Journal;2024-06-19

3. Editorial: Moral psychology of AI;Frontiers in Psychology;2024-03-11

4. Do Moral Judgments in Moral Dilemmas Make One More Inclined to Choose a Medical Degree?;Behavioral Sciences;2023-06-05
