Authors:
Wu Lijin, Huang Jianye, He Jindong, Lin Nan, Liao Feilong, Hou Jiaye
Publisher:
Springer Nature Singapore