Author:
Hao Yuan, Yongjun Chen, Xia Hu, Shuiwang Ji
Abstract
Interpreting deep neural networks is of great importance for understanding and verifying deep models for natural language processing (NLP) tasks. However, most existing approaches focus only on improving model performance and ignore interpretability. In this work, we propose an approach to investigate the meaning of hidden neurons in convolutional neural network (CNN) models. We first employ saliency maps and optimization techniques to approximate the information that hidden neurons detect from input sentences. We then develop regularization terms and explore words in the vocabulary to interpret this detected information. Experimental results demonstrate that our approach identifies meaningful and reasonable interpretations for hidden spatial locations. Additionally, we show that our approach can describe the decision procedure of deep NLP models.
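The first step the abstract mentions, using saliency maps to approximate what a hidden CNN neuron detects from an input sentence, can be illustrated with a gradient-based sketch. The snippet below is not the authors' implementation; the toy TextCNN model, vocabulary, example sentence, and the choice of a gradient-norm saliency score are all assumptions made for illustration only.

```python
# Hypothetical sketch: gradient-based saliency for one hidden neuron of a text CNN.
# All names (TextCNN, VOCAB, the example sentence) are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB = ["<pad>", "the", "movie", "was", "great", "boring", "plot"]
EMBED_DIM, NUM_FILTERS, KERNEL_SIZE = 8, 4, 3

class TextCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), EMBED_DIM)
        self.conv = nn.Conv1d(EMBED_DIM, NUM_FILTERS, KERNEL_SIZE, padding=1)

    def forward(self, token_ids):
        x = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                # (batch, embed_dim, seq_len)
        return torch.relu(self.conv(x))      # hidden feature map (batch, filters, seq_len)

model = TextCNN()
sentence = torch.tensor([[1, 2, 3, 4, 6]])   # toy input: "the movie was great plot"

# Forward pass while retaining gradients on the word embeddings.
embeddings = model.embed(sentence)
embeddings.retain_grad()
hidden = torch.relu(model.conv(embeddings.transpose(1, 2)))

# Pick one hidden neuron (filter f at spatial position p) and backpropagate its
# activation to the input embeddings; the gradient norm per word serves as a
# saliency score indicating which input words this neuron responds to.
f, p = 2, 3
hidden[0, f, p].backward()
saliency = embeddings.grad[0].norm(dim=1)    # one score per input word

for idx, score in zip(sentence[0], saliency):
    print(f"{VOCAB[idx]:>8s}  saliency={score.item():.4f}")
```

The paper's second step, interpreting the detected information via regularization terms and a search over vocabulary words, would build on scores like these; that part is omitted here.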
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
16 articles.
1. Arabic News Summarization based on T5 Transformer Approach;2023 14th International Conference on Information and Communication Systems (ICICS);2023-11-21
2. From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI;ACM Computing Surveys;2023-07-13
3. Towards Improved and Interpretable Deep Metric Learning via Attentive Grouping;IEEE Transactions on Pattern Analysis and Machine Intelligence;2023-01-01
4. Combining deep ensemble learning and explanation for intelligent ticket management;Expert Systems with Applications;2022-11
5. Interpreting Deep Networks;Proceedings of the 2022 6th International Conference on Electronic Information Technology and Computer Engineering;2022-10-21