Abstract
Artificial agents have become increasingly prevalent in human social life. In light of the diversity of new human–machine interactions, we face renewed questions about the distribution of moral responsibility. Beyond positions that deny the very possibility of attributing moral responsibility to artificial systems, recent approaches discuss the circumstances under which artificial agents may qualify as moral agents. This paper revisits the discussion of how responsibility might be distributed between artificial agents and their human interaction partners (including the producers of artificial agents) and asks whether attributions of responsibility should remain entirely on the human side. While acknowledging a crucial difference between living human beings and artificial systems, one that culminates in an asymmetric feature of human–machine interactions, the paper investigates the extent to which artificial agents may reasonably be attributed a share of moral responsibility. To elaborate criteria that can justify a distribution of responsibility in certain human–machine interactions, two types of criteria are examined: interaction-related criteria and criteria derived from socially constructed responsibility relationships. The focus lies on evaluating potential criteria that refer to the fact that artificial agents surpass human capacities in some respects. This is contrasted with socially constructed responsibility relationships that do not take such criteria into account. In summary, situations are examined in which it seems plausible that moral responsibility can be distributed between artificial and human agents.
Funder
Ludwig-Maximilians-Universität München
Publisher
Springer Science and Business Media LLC
Cited by 11 articles.