Abstract
Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to understand what goes on when an AI system generates a result, and on what grounds that result is reached. There are sustained technical efforts to make systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative, non-technical approach to explainability that complements existing ones. Leaving aside technical, statistical, or data-related issues, we focus on the conceptual underpinnings of the design decisions made by developers and other stakeholders during the lifecycle of a machine learning project. For instance, the design and development of an app that tracks snoring to detect possible health risks presupposes some picture or another of “health”, a key notion that conceptually underpins the project. We take it as a premise that such key concepts are necessarily present during design and development, albeit perhaps tacitly. We argue that by providing “justificatory explanations” of how the team understands the relevant key concepts behind its design decisions, interested parties could gain valuable insights and make better sense of the workings and outcomes of these systems. Using the concept of “health”, we illustrate how a particular understanding of it might influence decisions during the design and development stages of a machine learning project, and how making this explicit by incorporating it into ex-post explanations might increase their explanatory and justificatory power. We posit that greater awareness of the key concepts underpinning design and development decisions may benefit any attempt to develop explainability methods. We recommend that “justificatory explanations” be provided as technical documentation.
These are declarative statements that, at their simplest, contain: (1) a high-level account of the team’s understanding of the key concepts relevant to the project’s main domain; (2) an account of how these understandings drive decision-making during the lifecycle stages; and (3) the reasons (which may be implicit in the account) that the person or persons offering the explanation consider to have plausible justificatory power for the decisions made during the project.
Funder
Ministerio de Economía, Industria y Competitividad, Gobierno de España
Generalitat de Catalunya
Universitat Autònoma de Barcelona
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Human-Computer Interaction, Philosophy
Cited by 3 articles.