Abstract
The use of electroencephalography (EEG) to recognize human emotions is a key technology for advancing human–computer interaction. This study proposes an improved deep convolutional neural network model for emotion classification, trained with a non-end-to-end method that combines bottom-, middle-, and top-layer convolution features. Four sets of experiments using 4500 samples were conducted to verify model performance. In addition, feature visualization was used to extract the three levels of features learned by the model, and a scatterplot analysis was performed. The proposed model achieved a high accuracy of 93.7%, and the extracted features exhibited the best separability among the tested models. We found that adding redundant layers did not improve model performance, and that removing the data of specific channels did not significantly reduce the classification performance of the model. These results indicate that the proposed model allows for emotion recognition with higher accuracy and speed than previously reported models. We believe that our approach can be implemented in various applications that require quick and accurate identification of human emotions.
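The abstract does not specify how the three levels of convolution features are combined. As a minimal illustrative sketch (not the authors' implementation), one common fusion scheme is to global-average-pool the feature maps taken from early, intermediate, and late convolution layers and concatenate the pooled vectors into a single feature vector for the classifier; the array shapes below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical feature maps from three depths of a CNN, shaped (H, W, C).
# Spatial size shrinks and channel count grows with depth, as is typical.
rng = np.random.default_rng(0)
bottom = rng.standard_normal((32, 32, 16))  # early-layer features
middle = rng.standard_normal((16, 16, 32))  # mid-layer features
top = rng.standard_normal((8, 8, 64))       # late-layer features

def gap(fmap):
    """Global average pooling: collapse (H, W, C) to a (C,) vector."""
    return fmap.mean(axis=(0, 1))

# Concatenate the pooled features from all three levels into one
# descriptor that a downstream classifier could consume.
fused = np.concatenate([gap(bottom), gap(middle), gap(top)])
print(fused.shape)  # (112,) = 16 + 32 + 64 channels
```

Pooling before concatenation keeps the fused vector's length independent of the spatial resolution at each depth, which is why this pattern is widely used for multi-level feature fusion.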
Funder
National Natural Science Foundation of China
Zhejiang Provincial Key Research and Development Program of China
National Key R&D Program of China
Cited by
3 articles.