Abstract
This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws attention to the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or “patients” of responsibility, who may demand reasons for actions and decisions made by using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based, not on agency, but on patiency.
Publisher
Springer Science and Business Media LLC
Subject
Management of Technology and Innovation; Health Policy; Issues, ethics and legal aspects; Health (social science)
References (40 articles)
1. Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.
2. Aristotle. (1984). Nicomachean ethics. In J. Barnes (Ed.), The complete works of Aristotle (Vol. 2, pp. 1729–1867). Princeton: Princeton University Press.
3. Bostrom, N. (2014). Superintelligence. Oxford: Oxford University Press.
4. Bryson, J. (2016). Patiency is not a virtue: AI and the design of ethical systems. In AAAI spring symposium series: Ethical and moral considerations in non-human agents. Retrieved 4 September 2018, from http://www.aaai.org/ocs/index.php/SSS/SSS16/paper/view/12686.
5. Caliskan, A., Bryson, J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356, 183–186.
Cited by
194 articles.