Abstract
Explaining the decisions made by a model is crucial in a domain such as medical diagnosis. With the advent of deep learning, it has become very important to explain why a model reaches a particular classification. This work tackles the transparency problem of convolutional neural networks (CNNs). We propose to generate propositional rules from CNNs, because such rules are close to the way humans reason. Our method considers a CNN as the union of two subnetworks: a multi-layer perceptron (MLP) formed by the fully connected layers, and a subnetwork comprising several 2D convolutional layers and max-pooling layers. Rule extraction proceeds in two main steps, with each step generating rules from one subnetwork of the CNN. In practice, we approximate the two subnetworks by two particular MLP models that make it possible to generate propositional rules. We performed experiments with two image datasets: MNIST digit recognition and skin-cancer diagnosis. With high fidelity, the extracted rules designated the location of the discriminant pixels, as well as the conditions that had to be met to reach the classification. We illustrated several examples of rules by their centroids and their discriminant pixels.
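To make the two-subnetwork view concrete, the following is a minimal sketch, assuming a PyTorch-style implementation and a 28x28 grayscale input such as MNIST; the layer sizes and names are hypothetical and only illustrate the decomposition from which rules would be extracted, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SplitCNN(nn.Module):
    """A CNN expressed as the union of two subnetworks."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Subnetwork 1: 2D convolutional and max-pooling layers.
        self.conv_subnet = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Subnetwork 2: the fully connected layers, i.e. an MLP over
        # the flattened convolutional features.
        self.mlp_subnet = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.conv_subnet(x)    # intermediate representation
        return self.mlp_subnet(features)  # class scores

# Usage: the conv subnetwork maps the image to features; the MLP
# subnetwork maps those features to class scores. Rule extraction
# would treat each part separately.
model = SplitCNN(num_classes=10)
logits = model(torch.randn(1, 1, 28, 28))
```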
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering