1. Attacks which do not kill training make adversarial learning stronger; Zhang; ICML 2020
2. Theoretically principled trade-off between robustness and accuracy; Zhang; ICML 2019
3. You only propagate once: Accelerating adversarial training via maximal principle; Zhang; NeurIPS 2019
4. Wide residual networks; Zagoruyko; BMVC 2016
5. Regularizing neural networks via adversarial model perturbation; Zheng; arXiv 2020