Affiliation:
1. College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
Abstract
Although deep convolutional neural networks (DCNNs) have achieved great success in the computer vision field, such models are considered to lack interpretability in decision-making. One of the fundamental issues is that their decision mechanism is regarded as a "black-box" operation. The authors design a binary tree structure convolution (BTSC) module and control the activation level of particular neurons to build an interpretable DCNN model. First, the authors design the BTSC module, in which each parent node generates two independent child layers, and then integrate it into a normal DCNN model. The main advantages of the BTSC are as follows: 1) child nodes of different parent nodes do not interfere with each other; 2) parent and child nodes can inherit knowledge. Second, considering the activation level of neurons, the authors design an information coding objective to guide neural nodes to learn the particular information coding that is expected. Through experiments, the authors verify that: 1) the decisions made by both ResNet and DenseNet models can be explained well based on the "decision information flow path" (known as the decision-path) formed in the BTSC module; 2) the decision-path can reasonably interpret the decision-reversal mechanism (robustness mechanism) of the DCNN model; 3) the credibility of a decision can be measured by the matching degree between the actual and expected decision-paths.
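To make the tree structure described above concrete, the following is a minimal illustrative sketch (not the authors' code) of a binary-tree-structured convolution block in PyTorch, where each parent node spawns two independent child convolution layers. The layer sizes, tree depth, and the "inheritance" scheme (children consuming the parent's feature map) are assumptions made here purely for illustration.

```python
# Hypothetical sketch of a BTSC-style block: each parent conv node feeds two
# independent child nodes, so sibling subtrees of different parents never
# share parameters or activations.
import torch
import torch.nn as nn


class BTSCNode(nn.Module):
    """One tree node: a conv layer whose output feeds two child branches."""

    def __init__(self, in_channels: int, out_channels: int, depth: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        if depth > 0:
            # Each parent creates two independent child layers.
            self.left = BTSCNode(out_channels, out_channels, depth - 1)
            self.right = BTSCNode(out_channels, out_channels, depth - 1)
        else:
            self.left = self.right = None

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        # Children "inherit knowledge" by consuming the parent's feature map.
        h = self.conv(x)
        if self.left is None:
            return [h]  # leaf activations form candidate decision-paths
        return self.left(h) + self.right(h)


if __name__ == "__main__":
    btsc = BTSCNode(in_channels=64, out_channels=64, depth=2)  # 4 leaves
    feats = torch.randn(1, 64, 14, 14)  # e.g. a feature map from a backbone stage
    leaves = btsc(feats)
    print(len(leaves), leaves[0].shape)  # 4 torch.Size([1, 64, 14, 14])
```

In such a design, tracing which leaf (or root-to-leaf chain) is most strongly activated for a given input would yield the kind of "decision information flow path" that the abstract refers to as the decision-path.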
Publisher
Institution of Engineering and Technology (IET)