Funder
National Natural Science Foundation of China
Guangxi University