Affiliation:
1. Oklahoma State University, Stillwater, USA
Abstract
Current approaches to explaining black-box machine learning models have been driven primarily by the intuitions of model developers rather than by end-user needs or the existing literature. Our goal is to draw on cognitive science and human factors research to design explanation displays. To that end, we used the Cleveland Heart Disease Data Set to build an eXtreme Gradient Boosting (XGBoost) heart disease prediction model. We established an initial context of use to inform the design of a prototype explanation display, grounded our design choices in cognitive chunk organization, and used SHapley Additive exPlanations (SHAP) to generate instance-level explanations for the model. Model evaluation showed good performance, and we developed four prototype explanation displays. Our work demonstrates that multiple prototype explanation displays for complex machine learning models can feasibly be designed by organizing features in a structured manner. We also provide a set of steps for designing and evaluating user-centered explanations in healthcare.
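To make the pipeline the abstract describes concrete, here is a minimal sketch that trains an XGBoost classifier on the UCI Cleveland Heart Disease data and generates instance-level SHAP explanations. The file path, column names, train/test split, and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: XGBoost heart disease model + instance-level SHAP explanations.
# Assumes the UCI "processed.cleveland.data" file is available locally.
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

# The Cleveland data set has 13 clinical features plus a target column ("num").
cols = ["age", "sex", "cp", "trestbps", "chol", "fbs", "restecg", "thalach",
        "exang", "oldpeak", "slope", "ca", "thal", "num"]
df = pd.read_csv("processed.cleveland.data", names=cols, na_values="?").dropna()

X = df[cols[:-1]]
y = (df["num"] > 0).astype(int)  # binarize: any disease severity vs. none

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Hyperparameters here are placeholders, not the paper's tuned values.
model = xgb.XGBClassifier(n_estimators=200, max_depth=3,
                          learning_rate=0.1, eval_metric="logloss")
model.fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles; each row of
# shap_values is an instance-level explanation for one patient.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0])
```

The per-instance SHAP values produced here are the raw material an explanation display would then organize into cognitive chunks, e.g., grouping features by clinical category before presenting them to an end user.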
Funder
U.S. National Science Foundation