Authors:
Xu Ruzhi, Li Min, Yang Xin, Liu Dexin, Chen Dawei
Abstract
Adversarial examples can cause object detection models to make wrong judgments, which threatens the security of driverless cars. In this paper, we propose a black-box adversarial attack algorithm with strong transferability for the object detection models of driverless cars, built by improving the Momentum Iterative Fast Gradient Sign Method (MI-FGSM) and combining ensemble learning with L∞ perturbation and spatial transformation. Extensive experiments on the nuScenes driverless dataset show that the proposed attack algorithm transfers well across architectures and successfully causes mainstream object detection models such as Faster R-CNN, SSD, and YOLOv3 to make wrong judgments. Building on the proposed attack algorithm, we then perform adversarial training with parametric noise injection to obtain a defense model with strong robustness. The proposed defense model significantly improves the robustness of the object detection model: it effectively mitigates various adversarial attacks against the object detection models of driverless cars without reducing accuracy on clean samples. This is of great significance for studying the application of object detection models for driverless cars in the real physical world.
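For context, the MI-FGSM baseline that the proposed attack builds on accumulates a momentum term over the L1-normalized loss gradient and takes signed steps projected back into an L∞ ball around the input. The following is a minimal PyTorch sketch of that baseline only, using a classification-style loss for simplicity; the paper's ensemble, spatial-transformation, and parametric-noise-injection extensions are not shown, and the function name, NCHW input layout, and hyperparameter values are illustrative assumptions rather than the authors' implementation.

import torch

def mi_fgsm(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10, mu=1.0):
    """Baseline MI-FGSM (Dong et al., 2018): momentum over the
    L1-normalized gradient, signed steps, projection to the L-inf ball.
    Assumes x is an NCHW batch of images scaled to [0, 1]."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the gradient by its per-sample L1 norm, then
        # accumulate it into the momentum buffer with decay factor mu.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        # Ascend the loss with a signed step of size alpha.
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the L-inf ball of radius eps and the valid range.
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv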