1. Adversarial attacks and defenses in Speaker Recognition Systems: A survey
2. Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In Proceedings of the 35th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 80), Jennifer Dy and Andreas Krause (Eds.). PMLR, 274–283.
3. Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, and Sungroh Yoon. 2018. Security and Privacy Issues in Deep Learning. CoRR abs/1807.11655 (2018). arXiv:1807.11655
4. Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. 2020. How To Backdoor Federated Learning. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research, Vol. 108), Silvia Chiappa and Roberto Calandra (Eds.). PMLR, 2938–2948.