Lightweight Privacy Protection via Adversarial Sample
-
Published:2024-03-26
Issue:7
Volume:13
Page:1230
-
ISSN:2079-9292
-
Container-title:Electronics
-
language:en
-
Short-container-title:Electronics
Author:
Xie Guangxu 1, Hou Gaopan 1, Pei Qingqi 1, Huang Haibo 2
Affiliation:
1. The State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710071, China
2. School of Electrical & Information Engineering, Hubei University of Automotive Technology, Shiyan 442002, China
Abstract
Adversarial sample-based privacy protection offers advantages over traditional privacy protection techniques. However, previous adversarial sample privacy protections have mostly been centralized, or have not accounted for the hardware limitations of the devices on which the protection runs, especially the user's local device. This work reduces the device requirements of adversarial sample privacy protection, making it friendlier to local deployment. Adversarial sample-based privacy protection relies on deep learning models, whose large parameter counts pose deployment challenges. Fortunately, structural pruning can reduce the parameter count of such models. Building on the structural pruning technique DepGraph and the existing adversarial sample privacy protections AttriGuard and MemGuard, we design two structural pruning-based adversarial sample privacy protections, in which the user obtains the perturbed data through a pruned deep learning model. Extensive experiments on four datasets demonstrate the effectiveness of our structural pruning-based adversarial sample privacy protection.
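To make the pipeline the abstract describes concrete, below is a minimal sketch of the two steps it combines: structurally pruning a model with DepGraph (via the torch-pruning library, which implements it) and then generating a perturbation on the user's device with the pruned model. This is not the authors' code; the ResNet-18 stand-in, the pruned channel indices, the FGSM-style perturbation, and the epsilon value are all illustrative assumptions, standing in for the AttriGuard/MemGuard-style perturbation step.

```python
# Sketch: prune a model with DepGraph (torch-pruning), then craft an
# FGSM-style perturbation with the lighter, pruned model.
# Assumptions: torch-pruning installed (pip install torch-pruning);
# ResNet-18 stands in for the paper's actual models.
import torch
import torch.nn as nn
import torch_pruning as tp
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
example_inputs = torch.randn(1, 3, 224, 224)

# 1) Build the dependency graph so coupled layers (BN, downstream convs)
#    are pruned consistently with the target layer.
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

# 2) Prune half of conv1's 64 output channels (indices chosen for
#    illustration); DepGraph propagates the change to dependent layers.
group = DG.get_pruning_group(
    model.conv1, tp.prune_conv_out_channels, idxs=list(range(32))
)
if DG.check_pruning_group(group):  # skip groups that would empty a layer
    group.prune()

# 3) FGSM-style perturbation computed with the pruned model, standing in
#    for the adversarial-sample privacy-protection step on the user device.
def perturb(x: torch.Tensor, target: torch.Tensor, eps: float = 8 / 255):
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), target)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x_adv = perturb(torch.rand(1, 3, 224, 224), torch.tensor([0]))
print(model(x_adv).argmax(dim=1))
```

The ordering matters for the deployment argument: pruning happens once, before the model is shipped, so the model the user must run locally to compute each perturbation is the smaller one.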
Funder
National Key Research and Development Program of China; National Natural Science Foundation of China; Key Research and Development Programs of Shaanxi
References: 33 articles.
1. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2013). Intriguing properties of neural networks. arXiv.
2. Jia, J., and Gong, N.Z. (2018, January 15–17). AttriGuard: A practical defense against attribute inference attacks via adversarial machine learning. Proceedings of the 27th USENIX Security Symposium (USENIX Security 18), Baltimore, MD, USA.
3. Jia, J., Salem, A., Backes, M., Zhang, Y., and Gong, N.Z. (2019, January 11–15). MemGuard: Defending against black-box membership inference attacks via adversarial examples. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK.
4. Shao, R., Shi, Z., Yi, J., Chen, P.Y., and Hsieh, C.J. (2022, January 17–20). Robust text CAPTCHAs using adversarial examples. Proceedings of the 2022 IEEE International Conference on Big Data (Big Data), Osaka, Japan.
5. Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., and Swami, A. (2016, January 21–24). The limitations of deep learning in adversarial settings. Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Saarbrücken, Germany.
Cited by: 13 articles.