Author:
Soni Kumari, Dr. Sheshang Degadwala
Abstract
Adversarial attacks pose a significant threat to the robustness and reliability of deep learning models, particularly in transfer learning, where pre-trained models are widely reused. In this research, we propose a novel approach for detecting adversarial attacks on transfer learning models using pixel map analysis. By examining changes in pixel values at a granular level, our method uncovers subtle manipulations that traditional detection techniques often overlook. We demonstrate the effectiveness of our approach through extensive experiments on several benchmark datasets, showing that it accurately detects adversarial attacks while maintaining high classification performance on clean data. Our findings highlight the value of incorporating pixel map analysis into the defense mechanisms of transfer learning models to strengthen their robustness against sophisticated adversarial threats.
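The abstract's core idea of analyzing per-pixel changes can be sketched as follows. This is an illustrative, hypothetical simplification (not the authors' actual pipeline): it compares a test image against a reference copy (e.g. a denoised or re-compressed version) and flags inputs whose mean per-pixel deviation exceeds a calibrated threshold. The function names and the threshold value are assumptions for illustration only.

```python
import numpy as np

def pixel_difference_map(image, reference):
    # Per-pixel absolute difference between a test image and a
    # reference version of it (e.g. a denoised or re-compressed copy).
    return np.abs(image.astype(np.float64) - reference.astype(np.float64))

def is_adversarial(image, reference, threshold=0.02):
    # Flag the input when the mean per-pixel deviation exceeds a
    # calibrated threshold; both inputs are assumed scaled to [0, 1].
    # The default threshold here is arbitrary and would need tuning
    # on clean validation data in practice.
    score = pixel_difference_map(image, reference).mean()
    return score > threshold

# Toy usage: a small uniform perturbation raises the mean pixel
# deviation well above the clean baseline of zero.
rng = np.random.default_rng(0)
clean = rng.random((32, 32, 3))
perturbed = np.clip(clean + 0.05 * np.sign(rng.standard_normal(clean.shape)), 0.0, 1.0)
print(is_adversarial(clean, clean))       # clean input is not flagged
print(is_adversarial(perturbed, clean))   # perturbed input is flagged
```

In practice the reference image is not available at test time, so a deployed detector would derive it from the input itself (for instance via smoothing or JPEG re-compression) and compare statistics of the resulting difference map rather than a raw mean.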