Abstract
This research introduces a novel approach, termed “explainable federated learning,” for privacy-preserving autism prediction in toddlers using deep learning (DL) techniques. The primary objective is to contribute to the development of efficient screening methods for autism spectrum disorder (ASD) while safeguarding individual privacy. The methodology proceeds in stages: exploratory data analysis, machine learning (ML) algorithms, federated learning (FL), and model explainability via local interpretable model-agnostic explanations (LIME). Non-linear predictive models, including autoencoders, k-nearest neighbors, and multi-layer perceptrons, are leveraged for accurate ASD prediction. The FL paradigm enables collaboration among multiple clients without centralizing raw data, addressing privacy concerns in medical data sharing, and privacy-preserving strategies such as differential privacy are integrated to further enhance data security. Model explainability is achieved through LIME, which provides interpretable insights into the prediction process. The experimental results demonstrate significant improvements in predictive accuracy and model interpretability over traditional ML approaches: the proposed approach achieved an average accuracy increase of 8% across all classifiers tested, outperforming traditional methods on both privacy and predictive metrics. These findings highlight the efficacy of the proposed methodology for ASD screening in the era of DL applications.
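To make the federated learning and differential-privacy combination described above concrete, the following Python sketch shows one plausible form of the idea: clients compute clipped local updates, and the server averages them and adds calibrated Gaussian noise. All names, synthetic data, the logistic model, and parameters such as CLIP_NORM and NOISE_STD are illustrative assumptions, not the paper's actual implementation.

# A minimal sketch of federated averaging with Gaussian differential-privacy
# noise. The model, client data, and hyperparameters below are assumptions
# for illustration only; they are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
N_CLIENTS, N_FEATURES, ROUNDS, LR = 3, 10, 5, 0.1
CLIP_NORM, NOISE_STD = 1.0, 0.1  # assumed DP clipping bound and noise scale

# Synthetic per-client screening data (features X, binary ASD label y).
clients = [
    (rng.normal(size=(50, N_FEATURES)), rng.integers(0, 2, size=50))
    for _ in range(N_CLIENTS)
]

def local_update(w, X, y):
    """One gradient-descent step of logistic regression on a client's data."""
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
    grad = X.T @ (p - y) / len(y)         # logistic-loss gradient
    delta = -LR * grad                    # proposed weight update
    norm = np.linalg.norm(delta)
    if norm > CLIP_NORM:                  # clip update to bound DP sensitivity
        delta *= CLIP_NORM / norm
    return delta

w = np.zeros(N_FEATURES)                  # global model weights
for _ in range(ROUNDS):
    updates = [local_update(w, X, y) for X, y in clients]
    avg = np.mean(updates, axis=0)        # server averages client updates
    noise = rng.normal(0.0, NOISE_STD * CLIP_NORM, size=w.shape)
    w += avg + noise / N_CLIENTS          # add calibrated Gaussian noise

print("global weights after private FedAvg:", np.round(w, 3))

Because raw records never leave a client, only the clipped, noised updates are shared; a LIME explainer could then be fit against the resulting global model's prediction function to produce the per-instance explanations the abstract describes.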
Publisher
King Salman Center for Disability Research