EMPT: a sparsity Transformer for EEG-based motor imagery recognition

Authors:

Liu Ming, Liu Yanbing, Shi Weiyou, Lou Yitai, Sun Yuan, Meng Qi, Wang Dezheng, Xu Fangzhou, Zhang Yang, Zhang Lei, Leng Jiancai

Abstract

Introduction: The Transformer network has attracted wide attention for its strong performance, and its self-attention mechanism offers an effective way to encode features across the many channels of electroencephalography (EEG) signals. However, building self-attention models on EEG data is hampered by the large amount of training data required and the computational complexity of the algorithm.

Methods: We propose a Transformer neural network that adds a Mixture of Experts (MoE) layer and a ProbSparse self-attention mechanism to decode time-frequency-spatial features from motor imagery (MI) EEG of spinal cord injury patients; the model is named EEG MoE-Prob-Transformer (EMPT). The common spatial pattern and a modified S-transform extract the time-frequency-spatial features, which are fed as embeddings into the improved Transformer for feature reconstruction. The expert models in the MoE layer then perform sparse mapping, and a fully connected layer outputs the final result.

Results: EMPT achieves an accuracy of 95.24% on the MI EEG dataset of patients with spinal cord injury, and it also performs strongly in comparative experiments against other state-of-the-art methods.

Discussion: Visualisation experiments on the MoE layer and the ProbSparse self-attention inside EMPT show that sparsity can be introduced into a Transformer neural network through the MoE layer and a Kullback-Leibler-divergence attention pooling mechanism, improving its applicability to EEG datasets. The work presents a novel deep learning approach for decoding MI-based EEG data.
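The abstract names two sparsity mechanisms without giving code, so the following are minimal illustrative sketches rather than the authors' implementation. The first sketches ProbSparse self-attention in the Informer style, which the Discussion characterises as a Kullback-Leibler-divergence attention pooling mechanism: each query's attention distribution is scored against the uniform distribution (approximated here by the max-minus-mean of its scaled scores), only the top-u "active" queries attend normally, and the remaining queries fall back to the mean of the values. The function name, tensor shapes, and the choice to form the full score matrix (a faithful implementation samples keys instead) are assumptions for illustration.

```python
import torch


def probsparse_attention(q, k, v, top_u):
    """Simplified ProbSparse self-attention (Informer-style sketch).

    q, k, v : (batch, heads, length, d_head)
    top_u   : number of "active" queries kept by the sparsity measurement.
    """
    b, h, length, d = q.shape
    scores = torch.matmul(q, k.transpose(-2, -1)) / d ** 0.5    # (b, h, L, L)

    # Query sparsity measurement: max minus mean of each query's scores,
    # a proxy for the KL divergence between its attention distribution
    # and the uniform distribution.
    sparsity = scores.max(dim=-1).values - scores.mean(dim=-1)  # (b, h, L)
    top_idx = sparsity.topk(top_u, dim=-1).indices              # (b, h, u)

    # Lazy queries fall back to the mean of the values.
    out = v.mean(dim=-2, keepdim=True).expand(b, h, length, d).clone()

    # Active queries attend normally over all keys.
    gathered = torch.gather(
        scores, 2, top_idx.unsqueeze(-1).expand(b, h, top_u, length))
    active = torch.matmul(gathered.softmax(dim=-1), v)          # (b, h, u, d)
    out.scatter_(2, top_idx.unsqueeze(-1).expand(b, h, top_u, d), active)
    return out
```

For example, probsparse_attention(q, k, v, top_u=8) on tensors of shape (1, 4, 64, 32) returns a (1, 4, 64, 32) output in which only 8 of the 64 queries carry a full attention pattern.

The MoE layer's role as a sparse feed-forward block can likewise be sketched as a top-k gated mixture of experts; only the k experts selected by the gate run for any given token. The class name, the number of experts, and the gating scheme are illustrative assumptions, since the abstract only states that expert models in the MoE layer perform sparse mapping before the fully connected output layer.

```python
import torch
import torch.nn as nn


class MoELayer(nn.Module):
    """Top-k gated mixture-of-experts feed-forward block (illustrative)."""

    def __init__(self, d_model, d_ff, n_experts=4, k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)])
        self.gate = nn.Linear(d_model, n_experts)
        self.k = k

    def forward(self, x):                          # x: (batch, length, d_model)
        weights, idx = self.gate(x).topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)          # renormalise the kept gates
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e         # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out
```

In a full encoder, a block of this kind would replace the position-wise feed-forward sub-layer, so each token activates only a small subset of the layer's parameters.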

Publisher

Frontiers Media SA

