Affiliation:
1. Computer Science and Engineering, Indian Institute of Information Technology, Design and Manufacturing - Kancheepuram, Chennai, Tamil Nadu, India
2. Computer Science and Engineering, IIT Madras, Chennai, Tamil Nadu, India
Abstract
There is a growing trend of using artificial intelligence, particularly deep learning algorithms, in medical diagnostics, revolutionizing healthcare by improving efficiency, accuracy, and patient outcomes. However, the use of artificial intelligence in medical diagnostics comes with a critical need to explain the reasoning behind its predictions and to ensure transparency in decision-making. Explainable artificial intelligence has emerged as a crucial research area to address this need for transparency and interpretability in medical diagnostics. Explainable artificial intelligence techniques aim to provide insights into the decision-making process of artificial intelligence systems, enabling clinicians to understand which factors the algorithms consider in reaching their predictions. This paper presents a detailed review of saliency-based (visual) methods, such as class activation methods, which have gained popularity in medical imaging because they provide visual explanations by highlighting the regions of an image most influential in the model's decision. We also survey the literature on non-visual methods, although the focus remains on visual methods. Building on the existing literature, we experiment with infrared breast images for detecting breast cancer. Towards the end of this paper, we propose an "attention-guided Grad-CAM" that enhances the visualizations for explainable artificial intelligence. The existing literature shows that explainable artificial intelligence techniques remain largely unexplored in the context of infrared medical images, which opens up a wide range of opportunities for further research to make clinical thermography an assistive technology for the medical community.
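As a brief illustration of the class activation methods reviewed in this paper, the sketch below shows a minimal Grad-CAM implementation in PyTorch. It is an assumption-laden example, not the paper's method: a pretrained resnet18 stands in for the (unspecified) thermography classifier, and the hooked layer, preprocessing, and class selection are placeholders. Grad-CAM weights the last convolutional activations by their globally average-pooled gradients, producing a heatmap of the regions most influential in the prediction.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative stand-in model; the paper's actual breast-thermography
# classifier is not specified in the abstract.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Cache the feature maps of the hooked layer on the forward pass.
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Cache the gradient of the score w.r.t. those feature maps.
    gradients["value"] = grad_out[0].detach()

# Grad-CAM is typically computed at the last convolutional block.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image: torch.Tensor, class_idx: int | None = None):
    """Return a [0, 1] heatmap over `image` (shape (1, 3, H, W)) for `class_idx`."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Per-channel importance weights: global average pooling of the gradients.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    # Weighted combination of activation maps, followed by ReLU.
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    # Upsample to input resolution and normalize for overlay on the image.
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0], class_idx
```

The attention-guided variant proposed later in the paper builds on this basic pipeline; the sketch above covers only the standard Grad-CAM computation.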