Abstract
When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques provide humans with certain forms of understanding of the systems in question. In this paper, I consider whether existing XAI techniques can indeed close the responsibility gap. I identify a number of significant limits to their ability to do so. Ensuring that responsibility for AI-assisted outcomes is maintained may require using different techniques in different circumstances, and potentially also developing new techniques that can avoid each of the issues identified.
Publisher
Springer Science and Business Media LLC