Adversarial Robustness in Deep Neural Networks based on Variable Attributes Stochastic Ensemble Model

Author:

Qin Ruoxi1, Wang Linyuan1, Du Xuehui1, Xie Pengfei1, Chen Xingyuan1, Yan Bin1

Affiliation:

1. PLA Strategy Support Force Information Engineering University

Abstract

Deep neural networks (DNNs) have been shown to suffer from critical vulnerabilities under adversarial attacks. This phenomenon has stimulated the creation of attack and defense strategies similar to those adopted in cyberspace security. Because each side's strategy depends on the other's mechanism, the algorithms on both sides behave as closely reciprocating processes, in which the defense side is particularly passive. Inspired by the dynamic defense approach proposed in cyberspace security to end this arms race, this paper defines the model order, network structure, and smoothing parameters as variable ensemble attributes and proposes a stochastic strategy that builds an ensemble model from heterogeneous and redundant member models. The proposed method introduces diversity and randomness into the deep neural network, breaking the fixed gradient correspondence between input and output. The unpredictability and diversity of the gradients prevent attackers from directly mounting white-box attacks, thereby addressing the extreme transferability and vulnerability of ensemble models under white-box attacks. A comparison of attack-success-rate (ASR) versus distortion curves under different attack scenarios shows that even an attacker with the strongest attack capability cannot easily exceed the attack success rate achieved against the ensemble smoothed model, especially under untargeted attacks.
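The core idea of the abstract can be illustrated with a minimal sketch. The toy code below is not the authors' implementation; it only demonstrates, with hypothetical linear "models" standing in for heterogeneous DNNs, how randomizing the ensemble order (number of sampled members, `k`), the member structures (which models are drawn), and a smoothing parameter (`sigma`, Gaussian input noise) makes the effective input-to-output gradient differ from query to query:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model zoo": each entry is a random linear classifier standing in
# for a heterogeneous DNN with its own structure and weights.
N_MODELS, DIM, N_CLASSES = 5, 16, 3
zoo = [rng.normal(size=(DIM, N_CLASSES)) for _ in range(N_MODELS)]

def stochastic_ensemble_predict(x, k=3, sigma=0.1, rng=rng):
    """Predict with a randomly drawn, smoothed sub-ensemble.

    k     -- ensemble order: how many member models are sampled this call
    sigma -- smoothing parameter: scale of Gaussian noise added to the input
    Each call draws a fresh sub-ensemble and fresh noise, so an attacker
    querying the system never observes one fixed input/output gradient.
    """
    idx = rng.choice(N_MODELS, size=k, replace=False)   # random member choice
    x_noisy = x + rng.normal(scale=sigma, size=x.shape)  # randomized smoothing
    logits = np.mean([x_noisy @ zoo[i] for i in idx], axis=0)
    return int(np.argmax(logits))

x = rng.normal(size=DIM)
preds = [stochastic_ensemble_predict(x) for _ in range(10)]
print(preds)  # repeated queries on the same input may disagree
```

Under these assumptions, a white-box attacker cannot back-propagate through "the" model, because the model realized at attack time is unlikely to match the one realized at inference time.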

Publisher

Research Square Platform LLC


Cited by 1 article.
