Affiliation
1. University of Electronic Science & Technology of China, Chengdu, Sichuan, China
Abstract
Great progress has been made in deep learning over the past few years, driving the deployment of deep learning–based applications into cyber-physical systems. However, the lack of interpretability of deep learning models leaves potential security holes. Recent research has found that deep neural networks are vulnerable to well-designed input examples, called adversarial examples. The perturbations in such examples are often too small for humans to detect, yet they completely fool deep learning models. In practice, adversarial attacks pose a serious threat to the success of deep learning. As deep learning applications continue to develop, adversarial examples in different fields have also received attention. In this article, we summarize the methods of generating adversarial examples in computer vision, speech recognition, and natural language processing, and we study the applications of adversarial examples. We also explore emerging research and open problems.
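As a concrete illustration of the kind of attack the abstract describes, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the best-known ways to generate adversarial examples for image classifiers. The PyTorch framing, the function name fgsm_attack, the eps value, and the [0, 1] pixel range are illustrative assumptions for this sketch, not details taken from the paper itself.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method (hypothetical helper for illustration):
    take one signed gradient step that increases the classification loss,
    bounded by eps in the L-infinity norm."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss w.r.t. the true label y
    loss.backward()
    # Perturb each pixel by +/- eps in the direction that raises the loss.
    x_adv = x_adv + eps * x_adv.grad.sign()
    # Clamp back to the valid image range so the result is a real input.
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with eps on the order of a few intensity levels, a perturbation of this kind can flip the prediction of an otherwise accurate classifier while remaining nearly imperceptible to humans, which is the phenomenon the abstract summarizes.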
Funder
National Natural Science Foundation of China
Research Fund of National Key Laboratory of Computer Architecture
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Control and Optimization, Computer Networks and Communications, Hardware and Architecture, Human-Computer Interaction
Cited by
11 articles.