Interpretability and Transparency of Machine Learning in File Fragment Analysis with Explainable Artificial Intelligence
Published: 2024-06-21
Issue: 13
Volume: 13
Page: 2438
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author:
Jinad Razaq 1, Islam ABM 1, Shashidhar Narasimha 1
Affiliation:
1. Department of Computer Science, Sam Houston State University, Huntsville, TX 77341, USA
Abstract
Machine learning models are increasingly used across diverse fields, including file fragment classification. As these models become more prevalent, it is crucial to understand and interpret their decision-making processes to ensure accountability, transparency, and trust. This research investigates the interpretability of four machine learning models used for file fragment classification through the lens of Explainable Artificial Intelligence (XAI) techniques. Specifically, we employ two prominent XAI methods, Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), to shed light on the black-box nature of these models. Through a detailed analysis of the SHAP and LIME explanations, we demonstrate the effectiveness of these techniques in improving the interpretability of the models’ decision-making processes. Our analysis reveals that these XAI techniques effectively identify the key features influencing each model’s predictions, as well as the features that are critical for predicting specific classes. The ability to interpret and validate the decisions made by machine learning models in file fragment classification can enhance trust in these models and inform improvements for better accuracy and reliability. Our research highlights the importance of XAI techniques in promoting transparency and accountability when applying machine learning models across diverse domains.
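Illustrative example (not taken from the article): the abstract describes applying SHAP and LIME to file fragment classifiers. The minimal Python sketch below shows how these two techniques are typically applied in this setting, assuming fragments are represented as 256-dimensional byte-frequency features and explained through a tree-based classifier; the synthetic data, feature names, class labels, and choice of RandomForestClassifier are placeholder assumptions, not details from the paper.

# Minimal sketch (not the authors' code): explaining a file-fragment
# classifier with SHAP and LIME. Fragments are assumed to be described by
# 256 byte-frequency features; all data below is synthetic and illustrative.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical dataset: 500 labeled fragments, 4 placeholder file-type classes.
rng = np.random.default_rng(0)
X_train = rng.random((500, 256))           # byte-frequency feature vectors
y_train = rng.integers(0, 4, size=500)     # class labels 0..3
X_test = rng.random((20, 256))
feature_names = [f"byte_{i:02x}" for i in range(256)]
class_names = ["pdf", "jpg", "docx", "zip"]  # placeholder class names

# Train one (here, tree-based) classifier to be explained.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# SHAP: attribution of each feature to each prediction of the tree ensemble.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)  # per-class, per-feature attributions
# shap.summary_plot(shap_values, X_test, feature_names=feature_names)  # global view

# LIME: local surrogate explanation for a single fragment's prediction.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=class_names,
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=10
)
print(explanation.as_list())  # top features driving this single prediction

Aggregating the SHAP attributions gives a global view of which byte-frequency features a model relies on, while LIME explains individual fragment predictions, mirroring the per-model and per-class analysis described in the abstract.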