Abstract
We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), a human in the loop is often needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to be responsible for her decision, the human in the loop has to have an explanation of the system’s recommendation available. Reason explanations are especially well suited to this end, and we examine whether—and how—it might be possible to make such explanations fit with AI systems. We support our claims by focusing on a case of disagreement between the human in the loop and the AI system.
Funder
Volkswagen Foundation
Deutsche Forschungsgemeinschaft
Technische Universität Dortmund
Publisher
Springer Science and Business Media LLC
Subject
History and Philosophy of Science, Philosophy
References (99 articles)
1. Alvarez, M. (2010). Kinds of Reasons: An Essay in the Philosophy of Action. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199550005.001.0001
2. Alvarez, M. (2017). Reasons for Action: Justification, Motivation, Explanation. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2017/entries/reasons-just-vs-expl/
3. Amgoud, L., & Prade, H. (2009). Using Arguments for Making and Explaining Decisions. Artificial Intelligence, 173(3–4), 413–436. https://doi.org/10.1016/j.artint.2008.11.006
4. Anscombe, G. E. M. (1962). Intention. Blackwell Press.
5. Armstrong, S., Sandberg, A., & Bostrom, N. (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. Minds and Machines, 22(4), 299–324. https://doi.org/10.1007/s11023-012-9282-2
Cited by
38 articles