Affiliation:
1. Department of Computer Science & Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, Punjab, India
Abstract
An intrusion detection system (IDS) is valuable for detecting anomalies and unauthorized access to a system or network. Because these IDS models are black boxes, network experts lack both the trust needed to act on alerts and the transparency needed to understand a model's inner logic. Moreover, biased model decisions degrade performance and increase the false positive rate, directly reducing accuracy. Maintaining transparency and fairness simultaneously in IDS models is therefore essential for accurate decision-making. Existing methods face a tradeoff between fairness and accuracy, which also affects the reliability and robustness of the model. Motivated by these research gaps, we developed the Fair‐XIDS model, which clarifies its internal logic with visual explanations and promotes fairness across its entire lifecycle. Fair‐XIDS integrates transparency and fairness algorithms to address imbalanced datasets, algorithmic bias, and postprocessing bias, achieving an average 85% reduction in false positive rate. To ensure reliability, the proposed model effectively mitigates the tradeoff between accuracy and fairness, with an average of 90% accuracy and more than 85% fairness. Evaluation over diverse datasets and classifiers demonstrates the model's model‐agnostic nature, with more than 85% consistency among diverse classifiers.
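The abstract reports results in terms of false positive rate and a fairness score. As an illustrative sketch only (the paper's own metric definitions are not given here), the two kinds of quantities can be computed as follows, using the standard false positive rate and demographic parity difference as an assumed example of a group-fairness notion:

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), with 1 = intrusion and 0 = benign."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction rate between two groups."""
    rates = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rates[g] = sum(preds) / len(preds)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Toy labels, predictions, and (hypothetical) group attribute.
y_true = [0, 0, 1, 1, 0, 1, 0, 0]
y_pred = [0, 1, 1, 0, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(false_positive_rate(y_true, y_pred))    # 0.2
print(demographic_parity_diff(y_pred, group)) # 0.25
```

An "85% reduction in false positive rate" would then mean the FPR after mitigation is 15% of its baseline value; a fairness gap of zero indicates parity between groups.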