Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation
Published: 2023-09-19
Journal: Information
Volume: 14
Issue: 9
Page: 516
ISSN: 2078-2489
Language: en
Author:
Mehdi Sadi 1, Bashir Mohammad Sabquat Bahar Talukder 2, Kaniz Mishty 1, Md Tauhidur Rahman 2
Affiliation:
1. Department of Electrical and Computer Engineering, Auburn University, Auburn, AL 36849, USA
2. Department of Electrical and Computer Engineering, Florida International University, Miami, FL 33199, USA
Abstract
Universal adversarial perturbations are image-agnostic and model-independent noise patterns that, when added to any image, can mislead trained deep convolutional neural networks into wrong predictions. Because these universal adversarial perturbations can seriously jeopardize the security and integrity of practical deep learning applications, existing techniques use additional neural networks to detect the presence of such noise at the input image source. In this paper, we demonstrate an attack strategy that, when activated by rogue means (e.g., malware, a hardware trojan), bypasses these existing countermeasures by injecting the adversarial noise at the AI hardware accelerator stage. We demonstrate the accelerator-level universal adversarial noise attack on several deep learning models using co-simulation of the software kernel of the Conv2D function and the Verilog RTL model of the hardware under the FuseSoC environment.
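To illustrate the core property the abstract describes, the following is a minimal sketch of how a universal (image-agnostic) perturbation is applied: one fixed noise tensor, bounded by an L-infinity budget, is added to any input image before inference. The function name, the epsilon value, and the random noise pattern are illustrative assumptions, not the perturbation-generation method or accelerator model from the paper.

```python
import numpy as np

def apply_universal_perturbation(image, delta, epsilon=10.0):
    """Add a fixed, image-agnostic perturbation `delta` to an image.

    The perturbation is clipped to an L-infinity budget `epsilon`
    (illustrative value), and the result is clipped back to the valid
    pixel range [0, 255]. The same `delta` can be reused for every
    input image, which is what makes the perturbation "universal".
    """
    delta = np.clip(delta, -epsilon, epsilon)
    perturbed = image.astype(np.float32) + delta
    return np.clip(perturbed, 0, 255).astype(np.uint8)

# Example: one fixed noise pattern applied to an arbitrary image
# (random data stands in for a real image and a crafted perturbation).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
delta = rng.uniform(-10, 10, size=(224, 224, 3)).astype(np.float32)
adv = apply_universal_perturbation(image, delta)
```

In the attack the paper describes, this addition would happen inside the accelerator datapath rather than in software, so an input-side detector never sees the perturbed image.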
Funder
National Science Foundation
Subject
Information Systems