1. Akhtar, N., Mian, A., 2018. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. IEEE Access 6, 14410–14430.
2. Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., Zhang, L., 2018. Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition. pp. 6077–6086.
3. Athalye, A., Carlini, N., Wagner, D., 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In: The 35th International Conference on Machine Learning. Vol. 80. ICML, pp. 274–283.
4. Bafna, M., Murtagh, J., Vyas, N., 2018. Thwarting Adversarial Examples: An L0-Robust Sparse Fourier Transform. In: The 32nd International Conference on Neural Information Processing Systems. NeurIPS, pp. 10096–10106.
5. Carlini, N., Wagner, D., 2017. Towards Evaluating the Robustness of Neural Networks. In: 2017 IEEE Symposium on Security and Privacy. SP, pp. 39–57.