1. Zhang et al. Attacks which do not kill training make adversarial learning stronger. International Conference on Machine Learning (ICML), 2020.
2. Srisakaokul et al. MulDef: Multi-model-based defense against adversarial examples for neural networks. arXiv preprint arXiv:1809.00065, 2018.
3. Shi et al. Robustness verification for Transformers. International Conference on Learning Representations (ICLR), 2020.
4. Papernot et al. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. CoRR, 2016.