Manifold-driven decomposition for adversarial robustness
Published: 2024-01-11
Volume: 5
ISSN: 2624-9898
Container-title: Frontiers in Computer Science
Short-container-title: Front. Comput. Sci.
Author:
Wenjia Zhang, Yikai Zhang, Xiaoling Hu, Yi Yao, Mayank Goswami, Chao Chen, Dimitris Metaxas
Abstract
The adversarial risk of a machine learning model has been widely studied. Most previous studies assume that the data lie in the whole ambient space. We take a new angle by bringing the manifold assumption into consideration. Assuming the data lie on a manifold, we investigate two new types of adversarial risk: the normal adversarial risk, due to perturbations along the normal direction, and the in-manifold adversarial risk, due to perturbations within the manifold. We prove that the classic adversarial risk can be bounded from both sides using the normal and in-manifold adversarial risks. We also show a surprisingly pessimistic case in which the standard adversarial risk can be non-zero even when both the normal and in-manifold adversarial risks are zero. We conclude with empirical studies supporting our theoretical results. Our results suggest the possibility of improving the robustness of a classifier without sacrificing model accuracy, by focusing only on the normal adversarial risk.
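To make the decomposition concrete, the three risks mentioned in the abstract can be sketched as follows. The notation below is an illustrative reading of the abstract, not the paper's exact definitions: for a classifier $f$, a data distribution $\mathcal{D}$ supported on a manifold $\mathcal{M}$, a perturbation budget $\epsilon$, and $N_x\mathcal{M}$ denoting the normal space of $\mathcal{M}$ at $x$,

\[
R_{\mathrm{adv}}(f) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\sup_{\|x'-x\|\le\epsilon} \mathbf{1}\{f(x') \ne y\}\Big] \quad \text{(standard adversarial risk, ambient-space ball)}
\]
\[
R_{\mathrm{nor}}(f) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\sup_{x' \in x + N_x\mathcal{M},\ \|x'-x\|\le\epsilon} \mathbf{1}\{f(x') \ne y\}\Big] \quad \text{(normal adversarial risk)}
\]
\[
R_{\mathrm{in}}(f) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\sup_{x' \in \mathcal{M},\ \|x'-x\|\le\epsilon} \mathbf{1}\{f(x') \ne y\}\Big] \quad \text{(in-manifold adversarial risk)}
\]

In these terms, the paper's claim is that $R_{\mathrm{adv}}$ is bounded from both sides by combinations of $R_{\mathrm{nor}}$ and $R_{\mathrm{in}}$, and that controlling $R_{\mathrm{nor}}$ alone can improve robustness without hurting accuracy.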
Publisher
Frontiers Media SA