Explainable AI in Healthcare

Author:

Upendran Shantha Visalakshi ¹

Affiliation:

1. Ethiraj College for Women, India

Abstract

With the advent of machine learning (ML)-based tools in the healthcare domain, various treatment methodologies, such as digital healthcare (HC) that integrates cross-domain fusion of cross-modality imaging and non-imaging health data, and personalized treatments, have been recommended to improve the overall efficacy of healthcare systems. Given the intensive demand for skilled physicians, ML approaches offer a wide range of functionalities, such as filtering emails, identifying objects in images, and analysing large volumes of complex, interrelated data. Massive amounts of healthcare data are generated every day within electronic health records. In turn, healthcare providers take a more predictive approach towards a unified system that concentrates on clinical decision support, clinical practice guideline development, and automated healthcare systems, thereby offering a broad range of features in a precise manner, such as improving patient data for better diagnosis and medical research for future reference. This chapter provides a complete overview of a typical ML workflow comprising the predominant phases, namely data collection, data pre-processing, modelling, training, evaluation, tuning, and deployment, and shows how explainable artificial intelligence (XAI) mechanisms help integrate interpretability and explainability into the ML workflow. In general, XAI can be defined as the set of processes and methods that produce comprehensive justifications of how a model functions, making it easier to understand and trust the potential outcomes generated by ML techniques. The ultimate aim is to explain this interaction to the end user, which leads to a trustworthy environment.
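The workflow phases named above can be sketched end to end on a toy problem. This is a minimal illustration only, not the chapter's method: the single-feature dataset, the threshold model, and all names are hypothetical.

```python
# 1. Data collection: toy (feature, label) pairs, e.g. a lab value vs. a diagnosis flag.
raw = [(2.0, 0), (3.5, 0), (6.0, 1), (7.5, 1), (5.0, 1), (1.0, 0)]

# 2. Pre-processing: min-max scale the feature into [0, 1].
xs = [x for x, _ in raw]
lo, hi = min(xs), max(xs)
data = [((x - lo) / (hi - lo), y) for x, y in raw]

# 3-5. Modelling, training, and evaluation: a one-parameter threshold
# classifier scored by plain accuracy.
def accuracy(threshold, samples):
    return sum((x >= threshold) == bool(y) for x, y in samples) / len(samples)

# 6. Tuning: grid-search the threshold over [0, 1].
best = max((t / 100 for t in range(101)), key=lambda t: accuracy(t, data))

# 7. Deployment: a prediction function that bundles the scaler and the tuned model.
def predict(x):
    return int((x - lo) / (hi - lo) >= best)

print(accuracy(best, data))  # → 1.0 on this separable toy data
```

Real pipelines replace each step with library components (e.g. a scaler, an estimator, cross-validated tuning), but the phase boundaries are the same.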
In addition, XAI brings distinct advantages to the healthcare domain: dimension reduction, feature importance, attention mechanisms, knowledge distillation, and surrogate representations can be used to develop and validate decision-support tools. The positive growth of XAI has enabled wider use of aggregated, personalized health data with ML models for automated diagnosis and for prompt, precise tailoring of therapies in an optimal and dynamic manner. XAI mechanisms ensure better decision making by letting the end user know how the ML model derived its outcomes and medical results.
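Of the techniques listed above, feature importance is the easiest to sketch in a model-agnostic way. The following illustration uses permutation importance (shuffle one feature at a time and measure the drop in accuracy); the toy "patient" records and the fixed model are hypothetical, not from the chapter.

```python
import random

random.seed(0)

# Toy records: [relevant_feature, noise_feature]; the label depends only on
# the first feature, so a faithful importance method should reflect that.
X = [[random.random(), random.random()] for _ in range(200)]
y = [int(row[0] > 0.5) for row in X]

def model(row):
    # A fixed black-box model that happens to use feature 0 only.
    return int(row[0] > 0.5)

def accuracy(samples, labels):
    return sum(model(r) == t for r, t in zip(samples, labels)) / len(labels)

baseline = accuracy(X, y)  # 1.0 by construction

# Permutation importance: shuffle one column, re-score, record the drop.
importances = []
for j in range(2):
    col = [row[j] for row in X]
    random.shuffle(col)
    Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
    importances.append(baseline - accuracy(Xp, y))

print(importances)  # feature 0 drop is large; feature 1 drop is 0.0
```

Because the model ignores the noise feature, its importance comes out exactly zero, while shuffling the relevant feature costs roughly half the accuracy; this is the kind of justification XAI surfaces to the end user.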

Publisher

IGI Global
