Affiliation:
1. College of Electric Power, Inner Mongolia University of Technology
2. Inner Mongolia Academy of Science and Technology
Abstract
For brain-computer interface (BCI) systems based on steady-state visual evoked potentials (SSVEP), traditional methods struggle to achieve satisfactory classification performance on short-time-window SSVEP signals. In this paper, a convolutional neural network classification method that fuses multiple sub-frequency bands with a convolutional block attention module (CBAM-CNN) is proposed for SSVEP-BCI tasks. The method extracts multi-sub-frequency-band SSVEP signals as the initial inputs of the network model and then performs feature fusion on all feature inputs. In addition, CBAM is embedded at both the initial-input and feature-fusion stages for adaptive feature refinement. To verify the effectiveness of the proposed method, this study evaluates its performance on datasets from Inner Mongolia University of Technology (IMUT) and Tsinghua University (THU). The experimental results show that the highest accuracy of CBAM-CNN reaches 98.13%. Within the 0.1 s to 2 s time window, the accuracy of CBAM-CNN is 2.01%-16.17%, 2.54%-25.38%, 4.74%-48.85%, 5.40%-49.94%, and 12.76%-53.88% higher than that of CNN, CCA-CWT-SVM, CCA-SVM, CCA-GNB, and CCA, respectively. The performance advantage of CBAM-CNN is especially pronounced in the short-time-window range of 0.1 s to 1 s. The maximum information transfer rate (ITR) of CBAM-CNN is 503.87 bit/min, which is 227.53 bit/min to 503.41 bit/min higher than that of the five EEG decoding methods above. Moreover, CBAM-CNN exceeds the typical CNN by 0.39%-16.17% in accuracy, recall, precision, and macro-F1. These results further indicate that CBAM-CNN has potential application value in SSVEP decoding.
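To make the described pipeline (multi-sub-band inputs, CBAM on the initial inputs, feature fusion, and a second CBAM before classification) concrete, the following is a minimal PyTorch sketch of such an architecture. It is an illustration under stated assumptions, not the authors' implementation: the number of sub-bands, channel counts, kernel sizes, and pooling choices are placeholders.

```python
# Minimal sketch (assumption: PyTorch; layer sizes, kernel shapes, and the
# number of sub-bands are illustrative, not the authors' exact configuration).
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Convolutional block attention module: channel attention, then spatial attention."""

    def __init__(self, channels, reduction=4, kernel_size=7):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, channels)
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors (shared MLP).
        gate = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * gate.view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))


class CBAMCNN(nn.Module):
    """Multi-sub-band SSVEP classifier with CBAM at the input and fusion stages."""

    def __init__(self, n_bands=3, n_channels=8, n_classes=4):
        super().__init__()
        self.input_cbam = nn.ModuleList([CBAM(1) for _ in range(n_bands)])
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 16, (n_channels, 1)),            # spatial filtering across electrodes
                nn.BatchNorm2d(16), nn.ELU(),
                nn.Conv2d(16, 16, (1, 25), padding=(0, 12)),  # temporal filtering
                nn.BatchNorm2d(16), nn.ELU(),
                nn.AvgPool2d((1, 4)),
            )
            for _ in range(n_bands)
        ])
        self.fusion_cbam = CBAM(16 * n_bands)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d((1, 8)),
            nn.Flatten(),
            nn.Linear(16 * n_bands * 8, n_classes),
        )

    def forward(self, bands):
        # bands: list of (batch, 1, n_channels, n_samples) sub-band signals.
        feats = [br(cb(x)) for cb, br, x in zip(self.input_cbam, self.branches, bands)]
        fused = self.fusion_cbam(torch.cat(feats, dim=1))  # fuse along the channel axis
        return self.classifier(fused)


if __name__ == "__main__":
    model = CBAMCNN(n_bands=3, n_channels=8, n_classes=4)
    x = [torch.randn(2, 1, 8, 250) for _ in range(3)]  # e.g. 1 s epochs at 250 Hz
    print(model(x).shape)  # -> torch.Size([2, 4])
```

In this sketch, each sub-frequency band is filtered and refined in its own branch, and fusion is performed by concatenating branch feature maps along the channel axis before the second CBAM; other fusion schemes would fit the same description.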
Publisher
Research Square Platform LLC