Explainable machine learning for decision support in healthcare: A scoping review (Preprint)

Authors:

Michael Shulha, Samira Rahimi, Amrita Sandhu, Gauri Sharma, Vinita D'Souza, Rola Harmouche, Jordan Hovdebo

Abstract

BACKGROUND

The uptake of machine learning-based decision support has faced challenges in real-world clinical settings. A key reason is that clinicians lack trust in black-box machine learning models. One approach to this challenge is the use of explainable machine learning, which ideally allows an end user to understand why a specific prediction is being made.

OBJECTIVE

This study aimed to describe the scope of explainable machine learning (XML) research in clinical decision support and to identify the approaches and frameworks that have been used to study end-user perceptions of explainability.

METHODS

Following PRISMA guidelines, a search protocol was developed and executed in Ovid MEDLINE ALL(R), EMBASE Classic + EMBASE, Web of Science Core Collection, CINAHL, and Cochrane Library CENTRAL (Trials) to identify eligible articles. Studies describing the testing, piloting, or implementation of explainable machine learning tools designed to support clinical decision making were eligible for synthesis. We summarized the machine learning methods, clinical scope, intended end users, and decision focus of the included studies. In a subanalysis, we also summarized the design and visual elements employed by researchers and the methodological approaches used to assess end-user perceptions of explainability. Finally, we conducted a thematic analysis to better understand the perceived potential health system and clinical end-user benefits of explainable machine learning-based decision support.

RESULTS

We found that the majority of studies focused on developing tools for doctors as the intended users (85%), for diagnostic support (45%), in the context of secondary care (55%). Explainability methods were highly varied, and the majority of studies used a unique explainability model (76%). Only 12% described some form of testing phase to assess the suitability of explainability methods with clinical end users. Improved end-user trust in machine learning and AI tools was the most commonly cited potential benefit.

CONCLUSIONS

The majority of research appears to focus on the mechanics of developing explainable machine learning models, with little attention paid to the clinical end-user experience. While increased trust in machine learning tools is often cited as a potential outcome of well-implemented explainability, there is little discussion of how this can be effectively measured and operationalized. Ultimately, improved alignment between research, implementation, and medical education will serve to advance XML for clinical decision support and the capacity of these types of tools to benefit healthcare.

Publisher

JMIR Publications Inc.
