Categorical-Parallel Adversarial Defense for Perception Models on Single-Board Embedded Unmanned Vehicles
Author:
Li Yilan 1, Fan Xing 1, Sun Shiqi 2, Lu Yantao 2, Liu Ning 3
Affiliation:
1. School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China
2. School of Computer Science, Northwestern Polytechnical University, Xi’an 710060, China
3. Midea Group, Beijing 100070, China
Abstract
Adversarial training has substantially improved the robustness of deep neural networks (DNNs) against input perturbations. However, applying these methods to perception tasks on unmanned vehicles, such as object detection and semantic segmentation, and particularly to real-time single-board computing devices, faces two primary challenges: the time-intensive training of large-scale models and the performance degradation caused by weight quantization in real-time deployments. To address these challenges, we propose Ca-PAT, an efficient and effective adversarial training framework for mitigating perturbations. Ca-PAT takes a novel approach by integrating quantization effects into adversarial defense strategies specifically for unmanned vehicle perception models on single-board computing platforms. It introduces a categorical-parallel adversarial training mechanism for efficient defense of large-scale models, coupled with an alternate-direction optimization framework that minimizes the adverse impact of weight quantization. We conducted extensive experiments on various perception tasks using the Imagenet-te dataset and data collected from physical unmanned vehicle platforms. The results show that Ca-PAT significantly outperforms state-of-the-art baselines, achieving substantial robustness improvements across a range of perturbation scenarios.
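The abstract pairs adversarial training with quantized deployment. As an illustration only, the following minimal PyTorch sketch shows the two generic ingredients that pairing involves: generating adversarial examples with standard projected gradient descent (PGD) and simulating low-bit weight quantization during training via a straight-through estimator. The names fake_quantize, QuantLinear, and pgd_attack are hypothetical; the sketch does not reproduce Ca-PAT's categorical-parallel training mechanism or its alternate-direction optimization, which are not detailed in the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w, num_bits=8):
    # Uniform symmetric fake quantization of a weight tensor.
    # The forward pass uses the quantized values; the straight-through
    # estimator lets gradients flow to the full-precision weights.
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    return w + (w_q - w).detach()

class QuantLinear(nn.Linear):
    # Linear layer whose weights are fake-quantized in every forward pass,
    # so training "sees" the quantization error it will face at deployment.
    def forward(self, x):
        return F.linear(x, fake_quantize(self.weight), self.bias)

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Standard L-infinity PGD: iteratively ascend the loss, project back
    # into the epsilon-ball around the clean input, and clip to [0, 1].
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()

# One adversarial training step on a toy model and a random placeholder batch.
model = nn.Sequential(nn.Flatten(), QuantLinear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))

x_adv = pgd_attack(model, x, y)              # craft adversarial inputs
opt.zero_grad()
F.cross_entropy(model(x_adv), y).backward()  # train on them with quantized weights
opt.step()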
Funder
National Natural Science Foundation of China; Natural Science Basic Research Program of Shaanxi Province