1. Ahmadreza Azizi, Ibrahim Asadullah Tahmid, Asim Waheed, Neal Mangaokar, Jiameng Pu, Mobin Javed, Chandan K. Reddy, and Bimal Viswanath. T-Miner: A generative approach to defend against trojan attacks on DNN-based text classification. In Proceedings of USENIX Security Symposium (Security), 2021.
2. Eugene Bagdasaryan and Vitaly Shmatikov. Blind backdoors in deep learning models. In Proceedings of USENIX Security Symposium (Security), 2021.
3. Xiaoyi Chen, Ahmed Salem, Michael Backes, Shiqing Ma, and Yang Zhang. BadNL: Backdoor attacks against NLP models. In Proceedings of Annual Computer Security Applications Conference (ACSAC), pages 554–569, 2021.
4. Unicode Consortium. Confusables. https://www.unicode.org/Public/security/13.0.0/, 2020. Accessed April 20, 2021.
5. Jiazhu Dai, Chuanshuai Chen, and Yufeng Li. A backdoor attack against LSTM-based text classification systems. IEEE Access, 7:138872–138878, 2019.