Abstract
Identifying and preventing spam has become increasingly challenging, particularly given the abundance of text-based content in emails, social media platforms, and websites. Although traditional spam filters are somewhat effective, they often struggle to keep pace with new spamming techniques. The introduction of Machine Learning (ML) and Deep Learning (DL) models has greatly improved the capabilities of spam detection systems, but the black-box nature of these models undermines user trust due to their lack of transparency. Explainable AI (XAI) has emerged to address this issue by making AI decisions more understandable to humans. This study combines XAI with ensemble learning, which uses multiple learning algorithms to improve performance, and proposes a robust and interpretable system for effective spam detection. Four classifiers were used for training and testing: Support Vector Machine (SVM), Logistic Regression (LR), Gradient Boosting (GB), and Decision Tree (DT). To reduce overfitting, two independent spam email datasets were merged and balanced. A stacking ensemble built on Random Forest (RF) outperformed the individual classifiers, achieving 98% recall, 96% precision, and a 97% F1-score. By leveraging XAI's interpretability, the model explains the reasoning behind its classifications, revealing hidden patterns associated with spam.
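To make the described architecture concrete, the following is a minimal sketch of such a stacking pipeline in scikit-learn. The toy corpus, the TF-IDF feature representation, and all hyperparameters are illustrative assumptions; the abstract does not specify the features, preprocessing, or parameter settings actually used.

```python
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Toy corpus standing in for the two merged, balanced spam datasets
# (hypothetical data; the real datasets are not detailed in the abstract).
emails = [
    "WIN a FREE prize, click this link now!!!",
    "Project meeting moved to 3 pm on Thursday",
    "Cheap meds, limited offer, act fast",
    "Can you review the draft report by Friday?",
    "You have been selected for a cash reward",
    "Lunch tomorrow? The usual place works for me",
    "Urgent: verify your account to avoid suspension",
    "Slides for the seminar are attached",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = spam, 0 = ham

# The four base learners named in the abstract: SVM, LR, GB, DT.
base_learners = [
    ("svm", LinearSVC()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("gb", GradientBoostingClassifier()),
    ("dt", DecisionTreeClassifier()),
]

# Stacking ensemble with Random Forest combining the base predictions.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=RandomForestClassifier(),
    cv=2,  # small fold count only because the toy corpus is tiny
)

# TF-IDF is one common text representation for spam filtering; it is an
# assumption here, not a method stated in the abstract.
model = make_pipeline(TfidfVectorizer(), stack)
model.fit(emails, labels)
print(model.predict(["Claim your free reward today!"]))  # expected: [1]
```

In this setup, the Random Forest meta-learner is trained on out-of-fold predictions from the four base classifiers, which is what lets the stack outperform any single model; the fitted pipeline can then be passed to a model-agnostic XAI tool to explain individual classifications.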
Publisher
Engineering, Technology & Applied Science Research