Abstract
Deep neural networks are vulnerable to adversarial inputs, yet corresponding attack research on human pose estimation (HPE), particularly on body joint detection, remains largely unexplored. Transferring classification-based attack methods to body joint regression tasks is not straightforward, and attack effectiveness and imperceptibility are inherently in tension. To address these issues, we propose local imperceptible attacks on HPE networks. Specifically, we reformulate imperceptible attacks on body joint regression as a constrained maximum-allowable-attack problem and approximate its solution using iterative gradient-based strength refinement and greedy pixel selection. Our method crafts effective adversarial attacks that account for both human perception and attack effectiveness. We conducted a series of imperceptible attacks against state-of-the-art HPE methods, including HigherHRNet, DEKR, and ViTPose. The experimental results demonstrate that, by significantly reducing the number of perturbed pixels, the proposed method achieves excellent imperceptibility while maintaining attack effectiveness: perturbing approximately 4% of the pixels suffices to attack HPE.
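To make the two ingredients named in the abstract concrete, the following is a minimal sketch of a sparse, budget-constrained attack: greedily select the small fraction of pixels with the largest gradient magnitude, then iteratively refine the perturbation strength on only those pixels. This is an illustration under strong simplifying assumptions, not the paper's method: the "HPE network" here is a toy linear map from pixels to joint coordinates (the paper attacks HigherHRNet, DEKR, and ViTPose), and `sparse_attack`, `attack_loss`, and the 4% `budget_frac` default are hypothetical names chosen for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an HPE network: a fixed linear map from image pixels
# to J joint coordinates (assumption for illustration only).
H, W, J = 16, 16, 4
Wmat = rng.normal(size=(J, H * W))

def predict(x):
    return Wmat @ x.ravel()

def attack_loss(x, y_tgt):
    # Attack objective: drive predicted joints toward a wrong target y_tgt
    # (we descend this loss).
    return np.sum((predict(x) - y_tgt) ** 2)

def grad(x, y_tgt):
    # Analytic gradient of the quadratic objective for the linear model;
    # for a real network this would come from backpropagation.
    return (2.0 * Wmat.T @ (predict(x) - y_tgt)).reshape(H, W)

def sparse_attack(x, y_tgt, budget_frac=0.04, steps=20, eps=0.1):
    # Greedy pixel selection: keep only the top ~budget_frac of pixels
    # ranked by gradient magnitude at the clean input.
    g = grad(x, y_tgt)
    k = max(1, int(budget_frac * x.size))
    mask = np.zeros_like(x)
    mask.ravel()[np.argsort(np.abs(g).ravel())[-k:]] = 1.0
    # Iterative gradient-based strength refinement on the selected pixels.
    x_adv = x.copy()
    for _ in range(steps):
        g = grad(x_adv, y_tgt)
        x_adv = np.clip(x_adv - (eps / steps) * np.sign(g) * mask, 0.0, 1.0)
    return x_adv, mask

x = rng.uniform(size=(H, W))
y_tgt = predict(x) + 1.0          # shift every joint by a fixed offset
x_adv, mask = sparse_attack(x, y_tgt)
print(int(mask.sum()), "of", x.size, "pixels perturbed")
```

The key design point the sketch mirrors is that imperceptibility is enforced structurally (a hard sparsity budget via the mask) rather than only through a small-norm constraint, which is what lets roughly 4% of the pixels carry the whole perturbation.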
Funder
National Natural Science Foundation of China
Natural Science Foundation of Zhejiang Province
Publisher
Springer Science and Business Media LLC
Subject
Computer Graphics and Computer-Aided Design, Computer Vision and Pattern Recognition, Visual Arts and Performing Arts, Medicine (miscellaneous), Computer Science (miscellaneous), Software
References (29 articles)
1. Carlini N, Wagner D (2017) Towards evaluating the robustness of neural networks. In: Proceedings of the 2017 IEEE symposium on security and privacy, IEEE, San Jose, 22-26 May 2017. https://doi.org/10.1109/SP.2017.49
2. Kurakin A, Goodfellow IJ, Bengio S (2018) Adversarial examples in the physical world. In: Yampolskiy RV (ed) Artificial intelligence safety and security, 1st edn. Taylor & Francis Group, New York. https://doi.org/10.1201/9781351251389-8
3. Su JW, Vargas DV, Sakurai K (2019) One pixel attack for fooling deep neural networks. IEEE Trans Evol Comput 23(5):828-841. https://doi.org/10.1109/TEVC.2019.2890858
4. Kurakin A, Goodfellow I, Bengio S (2016) Adversarial machine learning at scale. arXiv preprint. https://doi.org/10.48550/arXiv.1611.01236
5. Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2018) Towards deep learning models resistant to adversarial attacks. In: Proceedings of the 6th international conference on learning representations, OpenReview.net, Vancouver, 30 April-3 May 2018