Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review

Author:

Xu Qian (1, 2, 3, 4, 5), Xie Wenzhao (4), Liao Bolin (3), Hu Chao (6), Qin Lu (2), Yang Zhengzijin (2), Xiong Huan (2), Lyu Yi (2), Zhou Yue (2), Luo Aijing (1, 4, 5)

Affiliation:

1. The Second Xiangya Hospital of Central South University, No. 139, Renmin Road Central, Changsha, Hunan, China

2. School of Life Sciences, Central South University, Changsha, Hunan, China

3. College of Computer Science and Engineering, Jishou University, Jishou, Hunan, China

4. Key Laboratory of Medical Information Research, The Third Xiangya Hospital, Central South University, College of Hunan Province, Changsha, Hunan, China

5. Clinical Research Center for Cardiovascular Intelligent Healthcare, Changsha, Hunan, China

6. Big Data Institute, Central South University, Changsha 410083, China

Abstract

Background. Artificial intelligence (AI) has developed rapidly, and its application extends to clinical decision support systems (CDSS) for improving healthcare quality. However, the interpretability of AI-driven CDSS poses significant challenges to its widespread application.

Objective. This study reviews the knowledge-based and data-based CDSS literature regarding interpretability in health care. It highlights the relevance of interpretability for CDSS and the areas for improvement from technological and medical perspectives.

Methods. A systematic search was conducted on the interpretability-related literature published from 2011 to 2020 and indexed in five databases: Web of Science, PubMed, ScienceDirect, Cochrane, and Scopus. Journal articles that focus on the interpretability of CDSS were included for analysis. Experienced researchers also participated in manually reviewing the selected articles for inclusion/exclusion and categorization.

Results. Based on the inclusion and exclusion criteria, 20 articles from 16 journals were finally selected for this review. Interpretability, which encompasses a transparent model structure, a clear relationship between input and output, and explainable AI algorithms, is essential for applying CDSS in healthcare settings. Methods for improving the interpretability of CDSS include ante-hoc methods, such as fuzzy logic, decision rules, logistic regression, and decision trees, for knowledge-based AI and white-box models, and post hoc methods, such as feature importance, sensitivity analysis, visualization, and activation maximization, for black-box models. A number of factors, such as data type, biomarkers, human-AI interaction, and the needs of clinicians and patients, can affect the interpretability of CDSS.

Conclusions. The review explores the meaning of the interpretability of CDSS and summarizes the current methods for improving interpretability from technological and medical perspectives. The results contribute to the understanding of the interpretability of AI-based CDSS in health care. Future studies should focus on establishing a formalism for defining interpretability, identifying the properties of interpretability, and developing an appropriate and objective metric for interpretability; in addition, users' demand for interpretability and how explanations should be expressed and delivered are also directions for future research.
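The ante-hoc versus post hoc distinction summarized in the Results can be illustrated with a minimal sketch. The snippet below trains a shallow decision tree whose decision rules can be read directly (ante-hoc interpretability) and then applies permutation feature importance to a random forest as a post hoc, model-agnostic explanation. It assumes scikit-learn is available; the synthetic dataset and the feature names (age, systolic_bp, glucose, bmi) are hypothetical placeholders for illustration and are not drawn from the reviewed studies.

```python
# Minimal sketch, assuming scikit-learn and a synthetic dataset.
# Feature names are hypothetical stand-ins for clinical biomarkers.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["age", "systolic_bp", "glucose", "bmi"]  # hypothetical biomarkers
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ante-hoc interpretability: a shallow decision tree whose learned rules
# form a transparent input-output mapping that can be printed and audited.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=feature_names))

# Post hoc interpretability: permutation feature importance applied to a
# black-box model after training, estimating each feature's contribution
# by measuring the drop in test accuracy when that feature is shuffled.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

In this sketch the decision tree plays the role of the transparent, ante-hoc model, while permutation importance is a post hoc technique that can be attached to any trained classifier without access to its internals.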

Funder

Clinical Research Center for Cardiovascular Intelligent Healthcare in Hunan Province

Publisher

Hindawi Limited

Subject

Health Informatics, Biomedical Engineering, Surgery, Biotechnology

Cited by 5 articles.

1. Uses of AI in Field of Radiology - What is State of Doctor & Patients Communication in Different Disease for Diagnosis Purpose;Journal for Research in Applied Sciences and Biotechnology;2023-10-25

2. Artificial intelligence for emergency medical care;Health Care Science;2023-10-13

3. ChatGPT and Clinical Decision Support: Scope, Application, and Limitations;Annals of Biomedical Engineering;2023-07-29

4. Explainable Artificial Intelligence in Clinical Decision Support Systems;2023 IV International Conference on Neural Networks and Neurotechnologies (NeuroNT);2023-06-16

5. Machine Learning-based Clinical Decision Support for Infection Risk Prediction;2023-05-01
