Abstract
Successful applications of brain-computer interface (BCI) approaches to motor imagery (MI) are still limited. In this paper, we propose a classification framework for MI electroencephalogram (EEG) signals that combines a convolutional neural network (CNN) architecture with a variational autoencoder (VAE). Because the decoder of the VAE outputs a Gaussian distribution, it can be used to fit the Gaussian distribution of EEG signals. A new input representation was developed by combining the time, frequency, and channel information of the EEG signal, and the CNN-VAE method was designed and optimized for this form of input. In this network, the features extracted by the CNN are classified by the deep VAE network. With an average kappa value of 0.564, our framework outperforms the best classification method reported in the literature for BCI Competition IV dataset 2b by 3%. Furthermore, on our own dataset, the CNN-VAE framework also yields the best performance for both three-electrode and five-electrode EEG, achieving average kappa values of 0.568 and 0.603, respectively. These results show that the proposed CNN-VAE method improves on the current state of the art.
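The sketch below illustrates the kind of pipeline the abstract describes: a CNN front end over a time-frequency-channel input representation, a VAE whose encoder produces a Gaussian latent code and whose decoder reconstructs the CNN features, and a classification head on the latent code. It is a minimal PyTorch sketch under our own assumptions, not the authors' released implementation; the layer sizes, input shape, loss weighting, and class count are illustrative.

```python
# Minimal sketch (not the authors' code) of a CNN feature extractor
# feeding a VAE whose latent code is classified. Shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNVAEClassifier(nn.Module):
    """CNN feature extractor followed by a VAE; the latent code is classified."""

    def __init__(self, n_channels=3, n_classes=2, latent_dim=16):
        super().__init__()
        # CNN front end over a time-frequency-channel representation
        # (e.g. a stack of per-channel spectrogram-like maps).
        self.cnn = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16), nn.ELU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ELU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
        )
        feat_dim = 32 * 4 * 4
        # VAE encoder: Gaussian posterior over the latent code.
        self.fc_mu = nn.Linear(feat_dim, latent_dim)
        self.fc_logvar = nn.Linear(feat_dim, latent_dim)
        # VAE decoder: reconstructs the CNN feature vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ELU(), nn.Linear(128, feat_dim)
        )
        # Classification head on the latent code.
        self.classifier = nn.Linear(latent_dim, n_classes)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        feats = self.cnn(x)
        mu, logvar = self.fc_mu(feats), self.fc_logvar(feats)
        z = self.reparameterize(mu, logvar)
        return self.classifier(z), self.decoder(z), feats, mu, logvar


def loss_fn(logits, labels, recon, feats, mu, logvar, beta=1.0):
    # Cross-entropy for classification, MSE reconstruction,
    # and KL divergence to a unit Gaussian prior.
    ce = F.cross_entropy(logits, labels)
    rec = F.mse_loss(recon, feats)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return ce + rec + beta * kl


# Usage on a dummy batch: 8 trials, 3 EEG channels, 32x32 time-frequency maps.
if __name__ == "__main__":
    model = CNNVAEClassifier(n_channels=3, n_classes=2)
    x = torch.randn(8, 3, 32, 32)
    y = torch.randint(0, 2, (8,))
    logits, recon, feats, mu, logvar = model(x)
    print(loss_fn(logits, y, recon, feats, mu, logvar).item())
```

Classifying from the latent code rather than the raw CNN features is one plausible reading of "the classification of the extracted CNN features is performed via the deep network VAE"; the exact coupling in the paper may differ.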
Funder
National Natural Science Foundation of China
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
166 articles.