Authors:
Mehdi Dastani, Vahid Yazdanpanah
Abstract
To support the trustworthiness of AI systems, it is essential to have precise methods for determining what or who is accountable for the behaviour, or the outcome, of an AI system. The assignment of responsibility to an AI system is closely related to the identification of the individuals or elements that caused its outcome. In this work, we present an overview of approaches that aim to model the responsibility of AI systems, discuss their advantages and shortcomings in dealing with various aspects of the notion of responsibility, and present research gaps and ways forward.
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Human-Computer Interaction, Philosophy
References (39 articles; first 5 listed):
1. Alechina N, Dastani M, Logan B (2014) Norm approximation for imperfect monitors. In: Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems, pp 117–124
2. Benjamins R (2021) A choices framework for the responsible use of AI. AI Ethics 1(1):49–53
3. Braham M, van Hees M (2011) Responsibility voids. Philos Q 61(242):6–15
4. Braham M, van Hees M (2012) An anatomy of moral responsibility. Mind 121(483):601–634
5. Bratman ME (2013) Shared agency: a planning theory of acting together. Oxford University Press, Oxford
Cited by: 13 articles.