Abstract
Explainable Artificial Intelligence (XAI) is increasingly being applied to health-related problems, yet it faces challenges because most models produce results that are opaque and difficult to interpret. The goal of explainable AI is to make machine learning and deep learning models more understandable and accessible to people. Consequently, as artificial intelligence grows in popularity in medicine, there is a pressing need for XAI models that enhance trust. This study explores explainable machine learning for disease prediction, with a particular focus on the transparency and reliability of the results. It examines the interpretability of artificial intelligence, addressing issues such as bias, fairness, and system reliability. The main aim is to minimize errors and gaps in human understanding while using artificial intelligence for disease prediction to improve outcomes for medical patients. The XAI methods were validated on multiple sclerosis prediction using two important models whose hyperparameters were fine-tuned. The experiments demonstrated that the XAI methods outperformed existing methods, achieving strong results in terms of accuracy, recall, F1 score, precision, and AUC. The proposed approach achieved 98.53% accuracy using a 75%/25% hold-out split and 98.14% accuracy using 10-fold cross-validation. This approach surpasses previous methods in the proportion of correct predictions and demonstrates its effectiveness for predicting multiple sclerosis in the real world.
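For illustration only, the sketch below shows how the two evaluation protocols named above (a 75%/25% hold-out split and 10-fold cross-validation, scored by accuracy, precision, recall, F1, and AUC) could be reproduced in Python with scikit-learn. The gradient-boosting classifier and the synthetic data are placeholders, not the authors' models or the multiple sclerosis dataset used in the study.

```python
# Minimal sketch of the evaluation protocols described in the abstract.
# The classifier and data are assumptions; the paper's actual models and
# MS dataset are not reproduced here.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=600, n_features=20, random_state=42)  # placeholder data
clf = GradientBoostingClassifier(random_state=42)                           # placeholder model

# 75%/25% hold-out evaluation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=42, stratify=y)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print("hold-out accuracy :", accuracy_score(y_te, pred))
print("hold-out precision:", precision_score(y_te, pred))
print("hold-out recall   :", recall_score(y_te, pred))
print("hold-out F1       :", f1_score(y_te, pred))
print("hold-out AUC      :", roc_auc_score(y_te, proba))

# 10-fold cross-validated accuracy
cv_acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print("10-fold CV accuracy:", cv_acc.mean())
```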