Authors:
Cui Gaochao, Li Xueyuan, Touyama Hideaki
Abstract
Electroencephalography (EEG)-based emotion recognition is an important technology for human–computer interaction. In the field of neuromarketing, emotion recognition based on group EEG can be used to analyze the emotional states of multiple users. Previous emotion recognition experiments have been based on individual EEGs, making them difficult to apply to estimating the emotional states of multiple users. The purpose of this study is to find a data processing method that can improve the efficiency of emotion recognition. This study used the DEAP dataset, which comprises EEG signals of 32 participants recorded while they watched 40 videos with different emotional themes. Emotion recognition accuracy based on individual and group EEGs was compared using the proposed convolutional neural network model. The analysis shows that phase locking value (PLV) differs across EEG frequency bands when subjects are in different emotional states. The results showed that an emotion recognition accuracy of up to 85% can be obtained for group EEG data using the proposed model, indicating that group EEG data can effectively improve the efficiency of emotion recognition. Moreover, the high emotion recognition accuracy achieved for multiple users in this study can contribute to research on handling group human emotional states.
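The abstract reports PLV differences across EEG frequency bands between emotional states. Below is a minimal sketch of how a PLV between two EEG channels can be computed per band (band-pass filter, Hilbert phase, mean phase-difference vector length). The band limits, channel pairing, and synthetic signals are illustrative assumptions, not the paper's actual pipeline; only the 128 Hz rate matches the preprocessed DEAP data.

    # Minimal PLV sketch (assumed, not the authors' implementation)
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def bandpass(x, low, high, fs, order=4):
        # Zero-phase Butterworth band-pass restricted to one EEG band.
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def plv(x, y, low, high, fs):
        # PLV = |mean(exp(i * (phase_x - phase_y)))|; 1 means perfect phase locking.
        px = np.angle(hilbert(bandpass(x, low, high, fs)))
        py = np.angle(hilbert(bandpass(y, low, high, fs)))
        return np.abs(np.mean(np.exp(1j * (px - py))))

    # Synthetic two-channel example at the DEAP preprocessed rate of 128 Hz;
    # in practice x and y would be two electrode signals from one trial.
    fs = 128
    t = np.arange(0, 60, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
    for name, (lo, hi) in bands.items():
        print(f"{name}: PLV = {plv(x, y, lo, hi, fs):.3f}")

Computing such PLVs per band and per channel pair is one plausible way the band-wise differences described in the abstract could be quantified before classification.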
Publisher
Springer Science and Business Media LLC
Cited by
10 articles.