Author:
Shiva Asadzadeh, Tohid Yousefi Rezaii, Soosan Beheshti, Saeed Meshgini
Abstract
Due to the effect of emotions on interactions, interpretations, and decisions, automatic detection and analysis of human emotions from EEG signals plays an important role in the treatment of psychiatric diseases. However, the low spatial resolution of EEG recorders poses a challenge. To overcome this problem, in this paper we model each emotion by mapping from scalp sensors to brain sources using a Bernoulli–Laplace-based Bayesian model. The standardized low-resolution electromagnetic tomography (sLORETA) method is used to initialize the source signals in this algorithm. Finally, a dynamic graph convolutional neural network (DGCNN) is used to classify emotional EEG, with the sources from the proposed localization model serving as the underlying graph nodes. In the proposed method, the relationships between the EEG source signals are encoded in the DGCNN adjacency matrix. Experiments on our EEG dataset, recorded at the Brain-Computer Interface Research Laboratory, University of Tabriz, as well as on the publicly available SEED and DEAP datasets, show that brain source modeling by the proposed algorithm significantly improves the accuracy of emotion recognition, achieving a classification accuracy of 99.25% when discriminating the two classes of positive and negative emotions. These results represent an absolute improvement of 1–2% in classification accuracy over existing approaches, in both subject-dependent and subject-independent scenarios.
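The classification stage described above rests on a graph convolution whose adjacency matrix is a learnable parameter encoding pairwise relations between source nodes. The following is a minimal NumPy sketch of one such dynamic-graph-convolution step, not the authors' implementation: the function name, shapes, and random initialization are illustrative assumptions, and a real DGCNN would learn `adj` and `theta` by backpropagation and typically use Chebyshev polynomial filters.

```python
import numpy as np

def dgcnn_layer(x, adj, theta):
    """One dynamic-graph-convolution step (illustrative sketch).

    x     : (n_sources, n_feats) per-node features, e.g. band power per source
    adj   : (n_sources, n_sources) learnable adjacency encoding node relations
    theta : (n_feats, n_out) layer weights
    """
    a = np.maximum(adj, 0)                        # ReLU keeps edge weights non-negative
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1) + 1e-8))
    a_norm = d @ a @ d                            # symmetric degree normalization
    return np.maximum(a_norm @ x @ theta, 0)      # propagate, project, ReLU

# Toy forward pass with random data (shapes are arbitrary examples)
rng = np.random.default_rng(0)
n_sources, n_feats, n_out = 6, 4, 3
x = rng.standard_normal((n_sources, n_feats))
adj = rng.standard_normal((n_sources, n_sources))
adj = (adj + adj.T) / 2                           # keep adjacency symmetric
theta = rng.standard_normal((n_feats, n_out))

h = dgcnn_layer(x, adj, theta)
print(h.shape)  # (6, 3): one n_out-dimensional embedding per source node
```

In the paper's setting, the node features would come from the Bayesian source-localization stage rather than random draws, and the trained adjacency matrix would then summarize which source-to-source relationships carry emotional information.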
Publisher
Springer Science and Business Media LLC
References (46 articles)
1. Marg, E. Descartes' Error: Emotion, reason, and the human brain. Optom. Vis. Sci. 72, 847–848 (1995).
2. Marrero-Fernández, P., Montoya-Padrón, A., Jaume-i-Capó, A. & Buades Rubio, J. M. Evaluating the research in automatic emotion recognition. IETE Tech. Rev. 31, 220–232 (2014).
3. Darwin, C. & Prodger, P. The Expression of the Emotions in Man and Animals (Oxford University Press, 1998).
4. Tian, Y.-I., Kanade, T. & Cohn, J. F. Recognizing action units for facial expression analysis. IEEE Trans. Pattern Anal. Mach. Intell. 23, 97–115 (2001).
5. Liu, Y., Sourina, O. & Nguyen, M. K. Transactions on Computational Science XII 256–277 (Springer, 2011).
Cited by
17 articles.