Affiliations:
1. DS Partnership, UK
2. Glasgow Caledonian University, UK
3. University of Strathclyde, UK
Abstract
Machine learning (ML) applications hold significant promise for innovation within healthcare; however, their full potential has not yet been realised, with few reports of clinical or cost benefits in practice. This is largely due to the complex clinical, ethical, and legal questions that arise from a lack of understanding of how some ML models operate and arrive at their decisions. eXplainable AI (XAI) is an approach that helps address this problem by making ML models understandable. This chapter reports on a systematic literature review investigating the use of XAI in healthcare over the last six years. Three research questions, identified as open issues in the literature, were examined: how bias was addressed, which XAI techniques were used, and how the applications were evaluated. Findings show that, apart from class imbalance and missing values, no other types of bias were accounted for in the shortlisted papers. None of the shortlisted papers evaluated the explainability outputs with clinicians, and none used an interventional study or randomised controlled trial (RCT).