Time series adversarial attacks: an investigation of smooth perturbations and defense approaches
Published: 2023-10-24
ISSN: 2364-415X
Container-title: International Journal of Data Science and Analytics
Language: en
Short-container-title: Int J Data Sci Anal
Author:
Pialla Gautier, Ismail Fawaz Hassan, Devanne Maxime, Weber Jonathan, Idoumghar Lhassane, Muller Pierre-Alain, Bergmeir Christoph, Schmidt Daniel F., Webb Geoffrey I., Forestier Germain
Abstract
Adversarial attacks represent a threat to every deep neural network. They are particularly effective if they can perturb a given model while remaining undetectable. They were initially introduced for image classifiers and are well studied for that task. For time series, few attacks have yet been proposed, and most of them are adaptations of attacks originally designed for image classifiers. Although these attacks are effective, they generate perturbations containing clearly discernible patterns such as sawtooth shapes and spikes. Whereas adversarial patterns are imperceptible on images, the attacks proposed to date are readily perceptible in the case of time series. In order to generate stealthier adversarial attacks for time series, we propose a new attack that produces smoother perturbations. We introduce a function to measure the smoothness of time series and, using it, find that smooth perturbations are harder to detect, both visually by the naked eye and by deep learning models. We also show two ways of protecting against adversarial attacks: first, by detecting the attacks using a deep model; second, by using adversarial training to improve the robustness of a model against a specific attack, thus making it less vulnerable.
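The smoothness function introduced in the paper is not reproduced in this abstract. As a rough illustration only, the Python sketch below uses the mean squared second difference, a standard roughness penalty, as a hypothetical stand-in for such a measure; the names roughness, spiky, and smooth are illustrative and not taken from the paper.

    import numpy as np

    def roughness(x: np.ndarray) -> float:
        # Mean squared second difference: a standard roughness
        # penalty. Lower values indicate a smoother series.
        # Hypothetical stand-in, not the paper's actual measure.
        d2 = np.diff(x, n=2)
        return float(np.mean(d2 ** 2))

    t = np.linspace(0.0, 1.0, 200)
    clean = np.sin(2 * np.pi * t)
    # Square-wave perturbation: abrupt jumps, akin to the sawtooth
    # and spike patterns the abstract attributes to image-derived attacks.
    spiky = clean + 0.05 * np.sign(np.sin(40 * np.pi * t))
    # Low-frequency sinusoidal perturbation of the same amplitude.
    smooth = clean + 0.05 * np.sin(6 * np.pi * t)
    print(roughness(spiky) > roughness(smooth))  # True

Under such a measure, the abrupt perturbation scores far higher than a smooth one of equal amplitude, which is the intuition behind preferring smooth perturbations for stealth.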
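The second defense mentioned in the abstract, adversarial training, can be sketched in a few lines of PyTorch. The sketch below uses a generic FGSM-style perturbation (a step along the sign of the input gradient) purely as a stand-in attack; it is not the smooth attack proposed in the paper, and fgsm_attack and adversarial_training_step are hypothetical helper names.

    import torch

    def fgsm_attack(model, loss_fn, x, y, eps=0.05):
        # Generic FGSM perturbation: step along the sign of the
        # input gradient. Stand-in attack, not the paper's method.
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        return (x_adv + eps * x_adv.grad.sign()).detach()

    def adversarial_training_step(model, loss_fn, optimizer, x, y, eps=0.05):
        # One adversarial training step: generate adversarial
        # examples from the current batch, then fit the model on them.
        x_adv = fgsm_attack(model, loss_fn, x, y, eps)
        optimizer.zero_grad()
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Repeatedly training on such perturbed batches is what makes the resulting model less vulnerable to the specific attack used during training, as the abstract notes.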
Publisher
Springer Science and Business Media LLC
Subject
Applied Mathematics, Computational Theory and Mathematics, Computer Science Applications, Modeling and Simulation, Information Systems
Cited by: 2 articles.