Abstract
The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is judged systematically differently when the agent is artificial rather than human. The results of a laboratory experiment suggest that decision-makers can indeed avoid punishment more easily by delegating to machines than by delegating to other people. Our results imply that the availability of artificial agents could give decision-makers stronger incentives to delegate sensitive decisions.
Funder
Bayerische Akademie der Wissenschaften
Publisher
Springer Science and Business Media LLC
Subject
Management of Technology and Innovation; Health Policy; Issues, ethics and legal aspects; Health (social science)
Cited by
14 articles.