INNT: Restricting Activation Distance to Enhance Consistency of Visual Interpretation in Neighborhood Noise Training

Authors:

Wang Xingyu 1, Ma Rui 1, He Jinyuan 1, Zhang Taisi 1, Wang Xiajing 2, Xue Jingfeng 1

Affiliation:

1. Beijing Key Laboratory of Software Security Engineering Technique, Beijing Institute of Technology, Beijing 100081, China

2. Experimental College, The Open University of China, Beijing 100081, China

Abstract

In this paper, we propose an end-to-end interpretable neighborhood noise training framework (INNT) to address the issue of inconsistent interpretations between clean and noisy samples in noise training. Noise training conventionally incorporates noisy samples into the training set and then trains for generalization. However, visual interpretations suggest that such models may learn the noise distribution rather than the desired robust target features. To mitigate this problem, we reformulate the noise training objective to maximize the consistency of visual interpretations across images in a sample's neighborhood. We design a noise activation distance constraint regularization term that enforces similarity between the high-level feature maps of clean and noisy samples. Additionally, we strengthen the noise training procedure by iteratively resampling noise so that the sample neighborhood is depicted more accurately. Furthermore, neighborhood noise is introduced to make sampling of the neighborhood more intuitive. Finally, we conduct qualitative and quantitative experiments on different CNN architectures and public datasets. The results indicate that INNT yields a more consistent decision rationale and better balances accuracy on noisy and clean samples.
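The two mechanisms described above, resampling neighborhood noise at every iteration and penalizing the activation distance between the high-level feature maps of clean and noisy samples, can be illustrated with a short training-loss sketch. The following is a minimal, hypothetical PyTorch-style example; the names (SmallCNN, innt_loss) and the hyperparameters sigma and lam are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of an INNT-style training loss; names and defaults are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy CNN that exposes its high-level feature map alongside the logits."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):
        feats = self.features(x)                 # high-level feature map
        logits = self.classifier(feats.flatten(1))
        return logits, feats

def innt_loss(model, x_clean, y, sigma=0.1, lam=1.0):
    # Resample neighborhood noise at every iteration so the sample
    # neighborhood is described by fresh perturbations each step.
    x_noisy = torch.clamp(x_clean + sigma * torch.randn_like(x_clean), 0.0, 1.0)

    logits_clean, feats_clean = model(x_clean)
    logits_noisy, feats_noisy = model(x_noisy)

    # Classification loss on both clean and noisy samples.
    ce = F.cross_entropy(logits_clean, y) + F.cross_entropy(logits_noisy, y)

    # Noise activation distance constraint: penalize the distance between
    # the high-level feature maps of clean and noisy samples.
    act_dist = F.mse_loss(feats_noisy, feats_clean)
    return ce + lam * act_dist

# Usage: model = SmallCNN(); loss = innt_loss(model, images, labels); loss.backward()
```

In this sketch the same classification loss is applied to clean and noisy inputs, while the MSE term stands in for the activation distance constraint; the paper's exact noise model, feature layer, and loss weighting may differ.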

Funder

National Natural Science Foundation of China

Major Scientific and Technological Innovation Projects of Shandong Province

Publisher

MDPI AG

Subject

Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
