Abstract
In response to the widespread use of automated decision-making technology, some have proposed a right to explanation. In this article, I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations demand something difficult, if not impossible, to extract from automated systems. I then consider an alternative, outcomes-focused approach to the normative evaluation of automated decision making and recommend it as a way to pursue the goods originally associated with explainability.
Publisher
Cambridge University Press (CUP)
Cited by 2 articles.