Author:
Chen Yuhan, Du Xia, Wang Dahan, Wu Yun, Zhu Shunzhi, Yan Yan
Publisher:
Aerospace Information Research Institute, Chinese Academy of Sciences
References (36 articles)
1. Athalye A, Carlini N and Wagner D. 2018. Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples//Proceedings of the 35th International Conference on Machine Learning (ICML 2018). Stockholm, Sweden: PMLR: 274-283
2. Alzantot M, Sharma Y, Elgohary A, Ho B J, Srivastava M B and Chang K W. 2018. Generating natural language adversarial examples//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018). Brussels, Belgium: Association for Computational Linguistics: 2890-2896 [DOI: 10.18653/v1/d18-1316]
3. Behjati M, Moosavi-Dezfooli S M, Baghshah M S and Frossard P. 2019. Universal adversarial attacks on text classifiers//Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Brighton, UK: IEEE: 7345-7349 [DOI: 10.1109/ICASSP.2019.8682430]
4. Bajaj A and Vishwakarma D K. 2023. Evading text based emotion detection mechanism via adversarial attacks. Neurocomputing, 558: #126787 [DOI: 10.1016/j.neucom.2023.126787]
5. Cer D, Yang Y F, Kong S Y, Hua N, Limtiaco N, John R S, Constant N, Guajardo-Cespedes M, Yuan S, Tar C, Strope B and Kurzweil R. 2018. Universal sentence encoder for English//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Brussels, Belgium: Association for Computational Linguistics: 169-174 [DOI: 10.18653/v1/d18-2029]