A Survey of Robustness and Safety of 2D and 3D Deep Learning Models against Adversarial Attacks

Authors:

Yanjie Li 1, Bin Xie 1, Songtao Guo 2, Yuanyuan Yang 3, Bin Xiao 1

Affiliations:

1. The Hong Kong Polytechnic University, Hong Kong

2. Chongqing University, China

3. Stony Brook University, USA

Abstract

Benefiting from the rapid development of deep learning, 2D and 3D computer vision applications have been deployed in many safety-critical systems, such as autonomous driving and identity authentication. However, deep learning models are not trustworthy enough because of their limited robustness against adversarial attacks. Physically realizable adversarial attacks further pose fatal threats to these applications and to human safety. A large body of work has emerged to investigate the robustness and safety of deep learning models against adversarial attacks. Toward trustworthy AI, we first construct a general threat model from different perspectives and then comprehensively review the latest progress on both 2D and 3D adversarial attacks. We extend the concept of adversarial examples beyond imperceptible perturbations and collate over 170 papers to give an overview of deep learning model robustness against various adversarial attacks. To the best of our knowledge, we are the first to systematically investigate adversarial attacks on 3D models, a flourishing field applied to many real-world applications. In addition, we examine physical adversarial attacks that lead to safety violations. Finally, we summarize currently popular topics, give insights on challenges, and shed light on future research toward trustworthy AI.
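To make the notion of an imperceptible adversarial perturbation concrete, the sketch below shows the classic fast gradient sign method (FGSM) in PyTorch. This is a generic illustration of one well-known attack, not the survey's own method; the model, inputs, and the epsilon budget are hypothetical placeholders.

```python
# Minimal FGSM sketch (illustrative only; not taken from the surveyed paper).
# Given a classifier, it crafts a perturbation bounded in the L-infinity norm
# by epsilon that increases the classification loss.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an adversarial example x_adv with ||x_adv - x||_inf <= epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small epsilon (e.g., 8/255 on images scaled to [0, 1]), the resulting perturbation is typically invisible to humans yet often flips the model's prediction, which is the baseline threat the survey extends to physical and 3D settings.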

Funder

HK RGC GRF

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science, Theoretical Computer Science

