INNT: Restricting Activation Distance to Enhance Consistency of Visual Interpretation in Neighborhood Noise Training
Published: 2023-11-23
Volume: 12, Issue: 23, Page: 4751
ISSN: 2079-9292
Container-title: Electronics
Short-container-title: Electronics
Language: en
Author:
Wang Xingyu 1, Ma Rui 1, He Jinyuan 1, Zhang Taisi 1, Wang Xiajing 2, Xue Jingfeng 1
Affiliation:
1. Beijing Key Laboratory of Software Security Engineering Technique, Beijing Institute of Technology, Beijing 100081, China
2. Experimental College, The Open University of China, Beijing 100081, China
Abstract
In this paper, we propose an end-to-end interpretable neighborhood noise training framework (INNT) to address the issue of inconsistent interpretations between clean and noisy samples in noise training. Noise training conventionally involves incorporating noisy samples into the training set, followed by generalization training. However, visual interpretations suggest that models may be learning the noise distribution rather than the desired robust target features. To mitigate this problem, we reformulate the noise training objective to minimize the inconsistency of visual interpretations across images in the sample neighborhood. We design a noise activation distance constraint regularization term to enforce the similarity of high-level feature maps between clean and noisy samples. Additionally, we enhance the structure of noise training by iteratively resampling noise to more accurately depict the sample neighborhood. Furthermore, neighborhood noise is introduced to achieve more intuitive sample neighborhood sampling. Finally, we conduct qualitative and quantitative evaluations on different CNN architectures and public datasets. The results indicate that INNT leads to a more consistent decision rationale and balances the accuracy between noisy and clean samples.
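The two mechanisms named in the abstract can be sketched compactly. Below is a minimal NumPy illustration, assuming the activation distance constraint is a squared L2 penalty between high-level feature maps and that neighborhood noise is additive Gaussian noise resampled each iteration; the function names, the choice of norm, and the clipping range are our assumptions for illustration, not details confirmed by the paper.

```python
import numpy as np

def activation_distance_loss(feat_clean, feat_noisy, lam=0.1):
    """Regularization term penalizing the distance between the high-level
    feature maps of a clean sample and its noisy counterpart (here a
    squared L2 mean; the paper's exact distance may differ)."""
    diff = feat_clean - feat_noisy
    return lam * np.mean(diff ** 2)

def neighborhood_noise(x, sigma=0.05, rng=None):
    """Draw one sample from the neighborhood of x by resampling additive
    Gaussian noise; called anew each training step so the neighborhood
    is depicted by fresh draws rather than a fixed noisy copy."""
    rng = np.random.default_rng() if rng is None else rng
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)
```

In a training loop, the total objective would combine the usual classification losses on the clean and noisy inputs with `activation_distance_loss` applied to the feature maps of a chosen high-level layer, pulling the two decision rationales toward each other.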
Funder
National Natural Science Foundation of China; Major Scientific and Technological Innovation Projects of Shandong Province
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering