Affiliation:
1. Institute of Information Technology, PLA Strategic Support Force Information Engineering University, Zhengzhou 450000, China
2. National Digital Switching System Engineering Technology Research Center, Zhengzhou 450000, China
Abstract
Vulnerability to adversarial examples poses a significant challenge to the secure application of deep neural networks. Adversarial training and its variants have shown great potential in addressing this problem. However, such approaches, which directly optimize the decision boundary, often result in overly complex adversarial decision boundaries that are detrimental to generalization. To address this issue, a novel plug-and-play method, Misclassification-Aware Contrastive Adversarial Training (MA-CAT), is proposed from the perspective of data-distribution optimization. MA-CAT leverages supervised decoupled contrastive learning to cluster natural examples of the same class in the logit space, indirectly increasing the margins of examples. Moreover, because examples differ in how difficult they are to train adversarially, MA-CAT adaptively customizes the strength of adversarial training for each example through an instance-wise misclassification-aware adaptive temperature coefficient. Extensive experiments on the CIFAR-10, CIFAR-100, and SVHN datasets demonstrate that MA-CAT can be easily integrated into existing models and significantly improves robustness with minimal computational cost.
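The abstract combines two ingredients: a supervised *decoupled* contrastive loss (positives excluded from the denominator) computed in the logit space, and a per-example temperature that grows with the model's misclassification confidence. The sketch below, in PyTorch, illustrates how such a loss could be assembled; the exact temperature rule (`base_tau * (2 - p_true)`), the function name, and the choice of cosine similarity over normalized logits are illustrative assumptions, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F

def ma_cat_contrastive_loss(logits, labels, base_tau=0.5):
    """Sketch of a supervised decoupled contrastive loss over logits
    with a misclassification-aware per-example temperature.

    logits: (N, C) classifier outputs; labels: (N,) class indices.
    The scaling rule below is a hypothetical illustration.
    """
    # Confidence assigned to the true class; low confidence marks
    # examples the model tends to misclassify.
    p_true = F.softmax(logits, dim=1).gather(1, labels.view(-1, 1)).squeeze(1)
    # Hypothetical rule: harder (less confident) examples get a
    # larger temperature, softening their contrastive pull.
    tau = base_tau * (2.0 - p_true)

    # Compare examples directly in the (normalized) logit space.
    z = F.normalize(logits, dim=1)
    sim = z @ z.t()

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = labels.view(-1, 1).eq(labels.view(1, -1)) & ~self_mask
    neg = ~labels.view(-1, 1).eq(labels.view(1, -1))

    loss = z.new_zeros(())
    count = 0
    for i in range(n):
        if pos[i].any() and neg[i].any():
            # Decoupled form: positives are attracted, but the
            # normalizing denominator runs over negatives only.
            pos_term = (sim[i][pos[i]] / tau[i]).mean()
            neg_term = torch.logsumexp(sim[i][neg[i]] / tau[i], dim=0)
            loss = loss + (neg_term - pos_term)
            count += 1
    return loss / max(count, 1)
```

In a full adversarial-training loop this term would presumably be added to the usual robust classification loss; the loop over the batch is kept for clarity and could be vectorized with masked log-sum-exp operations.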