Boosting Adversarial Training Using Robust Selective Data Augmentation
Published: 2023-05-20
Volume: 16, Issue: 1
ISSN: 1875-6883
Container-title: International Journal of Computational Intelligence Systems
Short-container-title: Int J Comput Intell Syst
Language: en
Authors:
Bader Rasheed, Asad Masood Khattak, Adil Khan, Stanislav Protasov, Muhammad Ahmad
Abstract
Artificial neural networks are now applied in a wide variety of fields and are approaching human-level performance on many tasks. Nevertheless, they are vulnerable to adversarial attacks in the form of small, intentionally designed perturbations that can lead to misclassifications, rendering these models unusable, especially in applications where security is critical. The best defense against such attacks so far is adversarial training (AT), which improves a model's robustness by augmenting the training data with adversarial examples. In this work, we show that the performance of AT can be further improved by employing the neighborhood of each adversarial example in the latent space to make additional targeted augmentations to the training data. More specifically, we propose a robust selective data augmentation (RSDA) approach to enhance the performance of AT. RSDA complements AT by inspecting the quality of the data from a robustness perspective and performing data transformation operations on specific neighboring samples of each adversarial sample in the latent space. We evaluate RSDA on the MNIST and CIFAR-10 datasets with multiple adversarial attacks. Our experiments show that RSDA yields significantly better results than AT alone on both adversarial and clean samples.
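The adversarial training baseline that the abstract builds on can be sketched in miniature. The snippet below is a hedged illustration only (the paper's RSDA augmentation details are not given in the abstract): it applies the well-known FGSM attack to a one-dimensional logistic model and then takes a training step on the adversarial example rather than the clean one, which is the core idea of AT. The model, loss, and all parameter values here are assumptions made for the sake of a self-contained example.

```python
import math

def sigmoid(z):
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """Craft an FGSM adversarial example for a 1-D logistic model.

    Loss is L(x) = -log(sigmoid(y * w * x)) with label y in {-1, +1}.
    Its input gradient is dL/dx = -y * w * sigmoid(-y * w * x), and FGSM
    moves x by eps in the direction sign(dL/dx), locally maximizing L.
    """
    grad_x = -y * w * sigmoid(-y * w * x)
    sign = 1 if grad_x > 0 else (-1 if grad_x < 0 else 0)
    return x + eps * sign

def adversarial_training_step(x, y, w, eps, lr):
    """One AT step: build x_adv with FGSM, then update the weight by
    gradient descent on the loss evaluated at x_adv instead of x."""
    x_adv = fgsm_perturb(x, y, w, eps)
    grad_w = -y * x_adv * sigmoid(-y * w * x_adv)  # dL/dw at x_adv
    return w - lr * grad_w

if __name__ == "__main__":
    loss = lambda x, y, w: -math.log(sigmoid(y * w * x))
    x, y, w, eps = 2.0, 1, 1.0, 0.5
    x_adv = fgsm_perturb(x, y, w, eps)
    # The crafted example is harder than the clean one for the current model.
    print(loss(x_adv, y, w) > loss(x, y, w))
    print(adversarial_training_step(x, y, w, eps, lr=0.1))
```

In a real setting the gradients come from autodiff over a deep network and the perturbation is bounded in an L-infinity ball per pixel; this scalar version only mirrors the structure of the inner attack / outer training loop that RSDA is proposed to augment.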
Publisher
Springer Science and Business Media LLC
Subject
Computational Mathematics, General Computer Science
Cited by
4 articles.