Affiliation:
1. College of Music and Dance, Jining Normal University, Jining, Inner Mongolia 012000, China
2. Philippine Christian University, Manila, Philippines
Abstract
To address the problem of music emotion classification, a music emotion recognition method based on a convolutional neural network is proposed. First, the mel-frequency cepstral coefficients (MFCC) and residual phase (RP) are weighted and combined to extract low-level audio features from the music, improving the efficiency of data mining. Then, the spectrogram is fed into a convolutional recurrent neural network (CRNN) to extract the time-domain, frequency-domain, and sequence features of the audio. At the same time, the low-level audio features are fed into a bidirectional long short-term memory (Bi-LSTM) network to further capture the sequential information of the audio features. Finally, the two sets of features are fused and passed to a softmax classifier trained with an additional center loss to recognize four music emotions. Experimental results on an emotion music dataset show that the proposed method achieves a recognition accuracy of 92.06% with a loss value of about 0.98, both better than competing methods. The proposed method offers a new, feasible approach for the development of music emotion recognition.
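Two of the ingredients the abstract names can be sketched compactly: the weighted combination of MFCC and RP features, and a softmax cross-entropy objective augmented with a center-loss term. The sketch below is a minimal NumPy illustration under stated assumptions; the mixing weight `alpha`, the regularization weight `lam`, and the feature dimensions are illustrative placeholders, not values given in the paper.

```python
import numpy as np

def fuse_features(mfcc, rp, alpha=0.6):
    """Weighted combination of MFCC and residual-phase (RP) features.

    alpha is an assumed mixing weight; the paper's actual weighting
    scheme is not specified in the abstract.
    """
    return alpha * mfcc + (1.0 - alpha) * rp

def softmax(logits):
    """Numerically stable row-wise softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def center_loss(features, labels, centers):
    """Center loss: 0.5 * mean squared distance of each feature vector
    from the learnable center of its own class."""
    diffs = features - centers[labels]
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def total_loss(logits, features, labels, centers, lam=0.1):
    """Softmax cross-entropy plus lambda-weighted center loss."""
    probs = softmax(logits)
    n = len(labels)
    ce = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    return ce + lam * center_loss(features, labels, centers)

# Toy example: 4 emotion classes, 8-dimensional fused features.
rng = np.random.default_rng(0)
feats = fuse_features(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)))
labels = np.array([0, 1, 2, 3, 0])
centers = np.zeros((4, 8))   # one learnable center per emotion class
logits = rng.normal(size=(5, 4))
print(total_loss(logits, feats, labels, centers))
```

In training, the class centers would be updated alongside the network weights, pulling same-emotion features together while the softmax term keeps the classes separable.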
Subject
General Mathematics, General Medicine, General Neuroscience, General Computer Science
Cited by
3 articles.
1. Exploring Machine Learning Techniques for Music Emotion Classification: A Comprehensive Review;2024 11th International Conference on Computing for Sustainable Global Development (INDIACom);2024-02-28
2. Machine learning music emotion recognition based on audio features;2023 IEEE 6th International Conference on Information Systems and Computer Aided Education (ICISCAE);2023-09-23
3. MERP: A Music Dataset with Emotion Ratings and Raters’ Profile Information;Sensors;2022-12-29