Abstract
In recent years, artificial intelligence systems have come to the forefront. These systems, mostly based on deep learning, achieve excellent results in areas such as image processing, natural language processing and speech recognition. Despite the statistically high accuracy of deep learning models, their output is often based on "black box" decisions. Thus, interpretability methods (Reyes et al. in Radiol Artif Intell 2(3):e190043, 2020) have become a popular way to gain insight into the decision-making process of deep learning models (Miller in Artif Intell 267:1–38, 2019). Explanation of deep learning models is desirable in the medical domain, since experts have to justify their judgments to the patients. In this work, we propose a method for explanation-guided training that uses a layer-wise relevance propagation technique to force the model to focus only on the relevant part of the image. We experimentally verified our method on a convolutional neural network model for the low-grade versus high-grade glioma classification problem. Our experiments produced promising results for this way of using interpretation techniques in the training process.
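The abstract only outlines the idea of explanation-guided training. Below is a minimal, hypothetical sketch of that general idea, not the authors' implementation: a simple gradient-based saliency map stands in for full layer-wise relevance propagation, and an extra loss term penalizes relevance that falls outside an assumed region-of-interest mask (e.g., a tumour segmentation). All names, the toy architecture, and the weighting parameter `lam` are illustrative assumptions.

```python
# Hypothetical sketch of explanation-guided training (not the authors' code).
# A gradient-based saliency map is used as a differentiable stand-in for an
# LRP heat map; the auxiliary loss penalizes relevance outside an ROI mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Illustrative classifier for a 2-class (LGG vs. HGG) image input."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def relevance_map(model, images, targets):
    """|input * gradient| saliency as a proxy for a relevance heat map."""
    images = images.detach().requires_grad_(True)
    logits = model(images)
    score = logits.gather(1, targets.unsqueeze(1)).sum()
    # create_graph=True so the penalty below can backpropagate into the weights
    grads, = torch.autograd.grad(score, images, create_graph=True)
    return (images * grads).abs().sum(dim=1)  # shape (B, H, W)

def explanation_guided_loss(model, images, targets, roi_masks, lam=0.5):
    """Cross-entropy plus a penalty on relevance outside the ROI mask."""
    ce = F.cross_entropy(model(images), targets)
    rel = relevance_map(model, images, targets)
    rel = rel / (rel.amax(dim=(1, 2), keepdim=True) + 1e-8)  # per-image normalization
    outside_roi = (rel * (1.0 - roi_masks)).mean()           # relevance off the ROI
    return ce + lam * outside_roi

# Toy usage with random tensors standing in for MRI slices and ROI masks.
model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(4, 1, 64, 64)
targets = torch.randint(0, 2, (4,))
roi_masks = (torch.rand(4, 64, 64) > 0.7).float()
loss = explanation_guided_loss(model, images, targets, roi_masks)
opt.zero_grad(); loss.backward(); opt.step()
```

The design choice illustrated here is that the explanation itself enters the training objective, so the optimizer is steered toward models whose evidence lies inside the clinically relevant region rather than on spurious background features.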
Funder
Siemens Healthineers
Slovak Technical University
Publisher
Springer Science and Business Media LLC
Subject
Electrical and Electronic Engineering, Applied Mathematics, Artificial Intelligence, Computational Theory and Mathematics, Computer Networks and Communications, Computer Science Applications, Information Systems
Cited by
10 articles.