Abstract
Schizophrenia (SZ) is a neuropsychiatric disorder that affects millions globally. Current diagnosis of SZ is symptom-based, which is difficult given the variability of symptoms across patients. To this end, many recent studies have developed deep learning methods for automated diagnosis of SZ, particularly from raw EEG, which provides high temporal precision. For such methods to be productionized, they must be both explainable and robust. Explainable models are essential for identifying biomarkers of SZ, and robust models are critical for learning generalizable patterns, especially amid changes in the deployment environment. One common example is channel loss during EEG recording, which can be detrimental to classifier performance. In this study, we developed a novel channel dropout (CD) approach to increase the robustness to channel loss of explainable deep learning models trained on EEG data for SZ diagnosis. We developed a baseline convolutional neural network (CNN) architecture and implemented our approach as a CD layer added to the baseline (CNN-CD). We then applied two explainability approaches to both models to gain insight into the learned spatial and spectral features and showed that applying CD decreases model sensitivity to channel loss. The CNN and CNN-CD achieved accuracies of 81.9% and 80.9% on test data, respectively. Furthermore, our models heavily prioritized the parietal electrodes and the α-band, which is supported by existing literature. We hope that this study motivates the further development of explainable and robust models and bridges the transition from research to application in a clinical decision support role.
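As a minimal illustration of such a CD layer, the PyTorch sketch below zeroes whole EEG channels at random during training so the downstream CNN cannot rely on any single electrode. The ChannelDropout class, the drop probability p=0.1, the (batch, channels, time) input layout, and the 19-channel toy CNN are assumptions made for illustration, not the implementation described in the paper.

```python
import torch
import torch.nn as nn

class ChannelDropout(nn.Module):
    """Randomly zero entire EEG channels during training to simulate channel loss."""

    def __init__(self, p: float = 0.1):
        super().__init__()
        self.p = p  # probability that any given channel is dropped

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Expected input shape: (batch, channels, time).
        if not self.training or self.p == 0.0:
            return x
        # One Bernoulli keep/drop decision per (sample, channel), broadcast over time.
        keep = torch.rand(x.shape[0], x.shape[1], 1, device=x.device) >= self.p
        return x * keep.to(x.dtype)


# Hypothetical usage: prepend the CD layer to a small 1-D CNN over raw EEG
# (19 channels, as in a standard 10-20 montage; binary SZ vs. control output).
model = nn.Sequential(
    ChannelDropout(p=0.1),
    nn.Conv1d(in_channels=19, out_channels=32, kernel_size=25, padding=12),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 2),
)
```

At inference time the layer is an identity map, so any robustness gain comes from the training-time perturbation alone; whether to rescale the kept channels by 1/(1-p), as in standard dropout, is a design choice the abstract does not specify.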
Publisher
Cold Spring Harbor Laboratory
Cited by
7 articles.