Authors:
Pierre-Alain Moëllic, Mathieu Dumont, Kevin Hector, Christine Hennebert, Raphaël Joud, Dylan Paulin
Abstract
The large-scale deployment of machine learning models in a wide variety of AI-based systems raises major security concerns related to their integrity, confidentiality and availability. These security issues span the whole traditional machine learning pipeline, including both the training and the inference processes. For embedded models deployed in physically accessible devices, the attack surface is particularly complex because of additional attack vectors that exploit implementation-based flaws. This chapter describes the most important attacks threatening state-of-the-art embedded machine learning models (especially deep neural networks) widely deployed in IoT applications (e.g., health, industry, transport), and highlights new critical attack vectors that rely on side-channel and fault injection analysis and significantly extend the attack surface of AIoT (Artificial Intelligence of Things) systems. In particular, we focus on two advanced threats against models deployed in 32-bit microcontrollers: model extraction and weight-based adversarial attacks.
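To make the weight-based threat concrete, the following minimal Python sketch (illustrative only, not taken from the chapter) simulates the basic primitive of such an attack: a single hardware-induced bit flip on an 8-bit quantized weight. The helper name flip_bit and the example values are assumptions for illustration.

import numpy as np

def flip_bit(weight: np.int8, bit: int) -> np.int8:
    """Flip one bit of an int8 weight (two's-complement representation)."""
    u = weight.astype(np.uint8)          # reinterpret as unsigned (wraps for negatives)
    return (u ^ np.uint8(1 << bit)).astype(np.int8)

w = np.int8(92)             # an original quantized weight
w_faulted = flip_bit(w, 7)  # flipping the sign bit: 92 -> -36
print(w, "->", w_faulted)

Flipping the most significant bit of a single weight swings its value across almost the full quantized range, which is why a handful of well-targeted faults can severely degrade a network's accuracy.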
Publisher:
Springer Nature Switzerland