Publisher: Springer Nature Switzerland