Affiliation:
1. School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing, P. R. China
Abstract
Artificial intelligence is developing rapidly toward greater intelligence and humanization. Recent studies have shown that many deep learning models are vulnerable to adversarial examples, but few studies have examined adversarial examples that attack facial expression recognition systems. Because facial expression recognition is essential to human–computer interaction, the security of this humanizing component of artificial intelligence deserves attention. Motivated by this, we explore the characteristics of adversarial examples for facial expression recognition. In this paper, we are the first to study facial expression adversarial examples (FEAEs), and we propose an adversarial attack method on facial expression recognition systems, a novel method for measuring the adversarial hardness of FEAEs, and two evaluation metrics for FEAE transferability. The experimental results show that our approach outperforms other gradient-based attack methods. We find that FEAEs can attack not only facial expression recognition systems but also face recognition systems, and that the transferability and adversarial hardness of FEAEs can be measured effectively and accurately.
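The abstract compares the proposed attack against gradient-based baselines. As background, a minimal sketch of one standard gradient-based attack, the fast gradient sign method (FGSM), is shown below on a toy logistic classifier; the model, weights, and epsilon value are illustrative assumptions, not the paper's actual method or expression model.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM step: move x by epsilon in the sign direction of the loss
    gradient, then clip back to the valid pixel range [0, 1]."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

def loss_grad_wrt_input(x, w, b, y):
    """Gradient of binary cross-entropy w.r.t. the input for a toy
    logistic model p = sigmoid(w.x + b); equals (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    return (p - y) * w

# Hypothetical stand-in model and "clean" input (not from the paper).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.6, 0.4, 0.7])   # clean input, values in [0, 1]
y = 1.0                          # true label

g = loss_grad_wrt_input(x, w, b, y)
x_adv = fgsm_perturb(x, g, epsilon=0.1)  # adversarial example
```

The perturbation is bounded by epsilon in the L-infinity norm, which is the usual constraint under which such attacks are evaluated.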
Funder
Key Technology Research and Development Program of Shandong
Publisher
World Scientific Pub Co Pte Ltd
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
Cited by
8 articles.