Affiliation:
1. School of Marxism, School of Music and Dance, Henan Normal University, Xinxiang, Henan 453007, China
2. Faculty of Education, Henan Normal University, Xinxiang, Henan 453007, China
Abstract
This work aims to classify and integrate music genres and emotions to improve the quality of music education. It proposes a web image education resource retrieval method based on a semantic network and interactive image filtering for music education environments. Feature sequences extracted from the music source data are fed into a model combining Long Short-Term Memory (LSTM) and an Attention Mechanism (AM), which judges the emotion category of the music. Emotion recognition accuracy increases after the LSTM-AM model is improved into the BiGR-AM model: the greater the difference between emotion genres, the easier it is to analyze the feature sequences containing emotional features, and the higher the recognition accuracy. Classification accuracy for the excited, relieved, relaxed, and sad emotions reaches 76.5%, 71.3%, 80.8%, and 73.4%, respectively. The proposed interactive filtering method based on a Convolutional Recurrent Neural Network can effectively classify and integrate music resources to improve the quality of music education.
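The pipeline the abstract describes (an extracted feature sequence, a recurrent encoder, attention-weighted pooling, and a softmax over the four emotion classes) can be sketched as below. This is a minimal illustrative NumPy sketch, not the authors' implementation: the random weight matrices, dimensions, and the simple one-layer encoder standing in for the BiGR-AM recurrent component are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions: T time steps of D-dim acoustic features,
# H hidden units, and the paper's four emotion classes.
T, D, H, C = 64, 40, 32, 4
EMOTIONS = ["excited", "relieved", "relaxed", "sad"]

features = rng.standard_normal((T, D))     # extracted feature sequence
W_enc = 0.1 * rng.standard_normal((D, H))  # stand-in for the recurrent encoder
hidden = np.tanh(features @ W_enc)         # (T, H) hidden states per time step

# Attention mechanism: score each time step, then pool the hidden
# states with the resulting softmax weights.
w_att = 0.1 * rng.standard_normal(H)
alpha = softmax(hidden @ w_att)            # (T,) attention weights, sum to 1
context = alpha @ hidden                   # (H,) attention-weighted summary

W_out = 0.1 * rng.standard_normal((H, C))
probs = softmax(context @ W_out)           # (4,) class probabilities
predicted = EMOTIONS[int(probs.argmax())]
```

In a trained model the encoder would be a bidirectional recurrent network and all weights would be learned; the sketch only shows how attention turns a variable-length feature sequence into a fixed-size vector for classification.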
Funder
Chinese National Funding of Social Sciences
Subject
General Mathematics, General Medicine, General Neuroscience, General Computer Science
Cited by
1 article.