1. 2008. Kolmogorov–Smirnov Test. Springer New York.
2. Aleksander Madry et al. 2017. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv preprint (2017).
3. Nicholas Carlini and David Wagner. 2017. Towards Evaluating the Robustness of Neural Networks. In IEEE Symposium on Security and Privacy (SP).
4. Nicholas Carlini and David Wagner. 2018. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Deep Learning and Security Workshop.
5. Christian Szegedy et al. 2014. Intriguing Properties of Neural Networks. In International Conference on Learning Representations.