Affiliations:
1. Universitat Politècnica de Catalunya, Barcelona, Catalunya, Spain
2. Universidad Nacional Autónoma de México, Mexico City, Mexico
3. German Aerospace Center (DLR), Cologne, Germany
4. University of Cologne, Cologne, Germany
Abstract
Background
One of the most critical vulnerabilities of deep learning models is their exposure to adversarial inputs: minor perturbations that can cause wrong decisions (e.g., the misclassification of an image). To address this vulnerability, the affected model must be retrained against adversarial inputs as part of the software testing process. To make this process energy efficient, data scientists need guidance on which metrics best reduce the number of adversarial inputs to create and use during testing, as well as on optimal dataset configurations.
Aim
We examined six guidance metrics and three retraining configurations for retraining deep learning models, specifically those with convolutional neural network architectures. Our goal is to improve the robustness of convolutional neural networks against adversarial inputs with regard to accuracy, resource utilization, and execution time, from the point of view of a data scientist in the context of image classification.
Method
We conducted an empirical study using five image classification datasets. We explore: (a) the accuracy, resource utilization, and execution time of retraining convolutional neural networks guided by six different metrics (neuron coverage, likelihood-based surprise adequacy, distance-based surprise adequacy, DeepGini, softmax entropy, and random selection), and (b) the accuracy and resource utilization of retraining convolutional neural networks with three different configurations (one-step adversarial retraining, adversarial retraining, and adversarial fine-tuning).
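As a minimal illustration (not taken from the paper's replication package), the two uncertainty metrics used for ordering inputs, DeepGini and softmax entropy, can be computed directly from a model's softmax outputs; the example `probs` array below is an assumption for demonstration:

```python
import numpy as np

def deepgini(probs):
    """DeepGini impurity, 1 - sum(p_i^2). Higher values mean more uncertainty."""
    return 1.0 - np.sum(probs ** 2, axis=-1)

def softmax_entropy(probs, eps=1e-12):
    """Shannon entropy of the softmax output. Higher values mean more uncertainty."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

# probs: (n_inputs, n_classes) softmax outputs, e.g. from model.predict(x_candidates)
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25]])
order = np.argsort(-deepgini(probs))  # indices of the most uncertain inputs first
```

Ranking candidate adversarial inputs this way lets a data scientist retrain with only the top of the ordering rather than the full adversarial set.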
Results
We reveal that adversarial retraining from the original model weights, with inputs ordered by uncertainty metrics, yields the best models with respect to accuracy, resource utilization, and execution time.
Conclusions
Although more studies are necessary, we recommend that data scientists use the above configuration and metrics to mitigate the vulnerability of deep learning models to adversarial inputs, as they can improve their models without using many inputs and without creating numerous adversarial inputs. We also show that dataset size has an important impact on the results.
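To illustrate the recommended configuration, the following hypothetical sketch (assuming a Keras-style model; `build_model` and the data arrays are placeholder names, not the paper's code) restarts training from the original weights on the training set augmented with the highest-uncertainty adversarial inputs:

```python
import numpy as np

def adversarial_retrain(build_model, original_weights, x_train, y_train,
                        x_adv, y_adv, scores, budget=1000, epochs=5):
    """Retrain from the original weights on the training data augmented with
    the `budget` adversarial inputs ranked most uncertain by `scores`."""
    idx = np.argsort(-scores)[:budget]       # top-ranked adversarial inputs
    x = np.concatenate([x_train, x_adv[idx]])
    y = np.concatenate([y_train, y_adv[idx]])
    model = build_model()                    # assumed compiled Keras-style model
    model.set_weights(original_weights)      # restart from the original model
    model.fit(x, y, epochs=epochs, batch_size=64)
    return model
```

Restarting from the original weights, rather than fine-tuning the already-attacked model, is the design choice the results above favor.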