Author:
Pumplun Luisa, Peters Felix, Gawlitza Joshua F., Buxmann Peter
Abstract
Clinical decision support systems (CDSSs) based on machine learning (ML) hold great promise for
improving medical care. Technically, such CDSSs are already feasible but physicians have been
skeptical about their application. In particular, their opacity is a major concern, as it may lead physicians
to overlook erroneous outputs from ML-based CDSSs, potentially causing serious consequences for
patients. Research on explainable AI (XAI) offers methods with the potential to increase the
explainability of black-box ML systems. This could significantly accelerate the application of ML-based CDSSs in medicine. However, XAI research to date has mainly been technically driven and
largely neglects the needs of end users. To better engage the users of ML-based CDSSs, we applied a
design science approach to develop a design for explainable ML-based CDSSs that incorporates
insights from XAI literature while simultaneously addressing physicians’ needs. This design comprises
five design principles that designers of ML-based CDSSs can apply to implement user-centered
explanations, which are instantiated in a prototype of an explainable ML-based CDSS for lung nodule
classification. We rooted the design principles and the derived prototype in a body of justificatory
knowledge consisting of XAI literature, the concept of usability, and an online survey study involving
57 physicians. We refined the design principles and their instantiation by conducting walk-throughs
with six radiologists. A final experiment with 45 radiologists demonstrated that our design resulted in
physicians perceiving the ML-based CDSS as more explainable and usable in terms of the required
cognitive effort than a system without explanations.
Publisher
Association for Information Systems
Subject
Computer Science Applications, Information Systems
Cited by
3 articles.