1. Zhong, N., Qian, Z., and Zhang, X. (2021, January 5–9). Undetectable adversarial examples based on microscopical regularization. Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China.
2. Athalye, A., Engstrom, L., Ilyas, A., and Kwok, K. (2018, January 10–15). Synthesizing robust adversarial examples. Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden.
3. Wu, L., Zhu, Z., Tai, C., and E, W. (2018). Understanding and enhancing the transferability of adversarial examples. arXiv.
4. Bhambri, S., Muku, S., Tulasi, A., and Buduru, A.B. (2019). A survey of black-box adversarial attacks on computer vision models. arXiv.
5. Chen, X., Weng, J., Deng, X., Luo, W., Lan, Y., and Tian, Q. (2021). Feature distillation in deep attention network against adversarial examples. IEEE Trans. Neural Netw. Learn. Syst.