Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers

Authors:

Iam Palatnik de Sousa, Marley M. B. R. Vellasco, Eduardo Costa da Silva

Abstract

Problem: An application of Explainable Artificial Intelligence methods to COVID CT-scan classifiers is presented. Motivation: Classifiers may be relying on spurious artifacts in the dataset images to achieve high performance, and explainability techniques can help identify this issue. Aim: For this purpose, several approaches were used in tandem to create a complete overview of the classifications. Methodology: The techniques used included GradCAM, LIME, RISE, Squaregrid, and direct gradient approaches (Vanilla, Smooth, Integrated). Main results: Among the deep neural network architectures evaluated for this image classification task, VGG16 was shown to be the most affected by biases towards spurious artifacts, while DenseNet was notably more robust against them. Further impacts: The results also show that small differences in validation accuracy can cause drastic changes in the explanation heatmaps for DenseNet architectures, indicating that small changes in validation accuracy may have large impacts on the biases learned by the networks. Notably, the strong performance metrics achieved by all of these networks (accuracy, F1 score, and AUC all in the 80-90% range) could give users the erroneous impression that there is no bias; the analysis of the explanation heatmaps, however, makes the bias evident.
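For illustration, the sketch below shows one way an explanation heatmap of the kind discussed in the abstract could be produced with Grad-CAM for a VGG16-based classifier. It is a minimal example under stated assumptions, not the study's actual pipeline: the ImageNet-pretrained VGG16, the random array standing in for a CT slice, and target_class=0 are placeholders, whereas "block5_conv3" is simply the last convolutional layer of the standard VGG16.

import numpy as np
import tensorflow as tf

def grad_cam(model, image, target_class, conv_layer_name="block5_conv3"):
    """Return a Grad-CAM heatmap (H x W, values in [0, 1]) for one image."""
    # Model mapping the input to the chosen conv layer's activations and the predictions.
    grad_model = tf.keras.Model(
        inputs=model.input,
        outputs=[model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        class_score = preds[:, target_class]
    # Gradients of the class score with respect to the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Global-average-pool the gradients to get one importance weight per channel.
    weights = tf.reduce_mean(grads, axis=(1, 2))
    # Weighted sum of the feature maps, followed by ReLU and normalization.
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights[0], axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()

# Usage with placeholders (the study instead fine-tuned its own COVID CT classifiers):
model = tf.keras.applications.VGG16(weights="imagenet")
ct_slice = np.random.rand(224, 224, 3).astype("float32") * 255.0  # stand-in for a CT slice
inputs = tf.keras.applications.vgg16.preprocess_input(ct_slice)
heatmap = grad_cam(model, inputs, target_class=0)
print(heatmap.shape)  # (14, 14); upsample and overlay on the image to inspect for bias

A heatmap concentrated on lung tissue suggests the classifier is using clinically relevant features, whereas one concentrated on text annotations, borders, or other spurious artifacts is the kind of bias the paper aims to expose.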

Funder

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Publisher

MDPI AG

Subject

Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry


Cited by 23 articles.

1. Toward Human-centered XAI in Practice: A survey;Machine Intelligence Research;2024-01-12

2. A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging;European Journal of Radiology;2023-12

3. Artificial Intelligence and Infectious Disease Imaging;The Journal of Infectious Diseases;2023-10-01

4. Explainable Artificial Intelligence in Healthcare Applications: A Systematic Review;2023 International Scientific Conference on Computer Science (COMSCI);2023-09-18

5. Open your black box classifier;Healthcare Technology Letters;2023-08-29
