Abstract
Both robots and humans can behave in ways that engender positive and negative evaluations of their behaviors and associated responsibility. However, extant scholarship on the link between agent evaluations and valenced behavior has generally treated moral behavior as a monolithic phenomenon and has largely focused on moral deviations. In contrast, contemporary moral psychology increasingly considers moral judgments to unfold in relation to a number of moral foundations (care, fairness, authority, loyalty, purity, liberty), each subject to both upholding and deviation. The present investigation examines whether social judgments of humans and robots emerge differently as a function of moral foundation-specific behaviors. This work is conducted in two studies: (1) an online survey in which agents deliver observed/mediated responses to moral dilemmas and (2) a smaller laboratory-based replication in which agents deliver interactive/live responses. In each study, participants evaluate the goodness of, and blame for, six foundation-specific behaviors, and evaluate the agent for perceived mind, morality, and trust. Across these studies, results suggest that (a) moral judgments of behavior may be agent-agnostic, (b) all moral foundations may contribute to social evaluations of agents, and (c) physical presence and agent class contribute to the assignment of responsibility for behaviors. Findings are interpreted to suggest that bad behaviors denote bad actors, broadly, but that machines bear a greater burden to behave morally, regardless of their credit- or blameworthiness in a situation.
Funder
U.S. Air Force Office of Scientific Research
Publisher
Springer Science and Business Media LLC
Subject
General Computer Science, Human-Computer Interaction, Philosophy, Electrical and Electronic Engineering, Control and Systems Engineering, Social Psychology
Cited by
30 articles.