Affiliation:
1. College of Music, Huizhou University, Huizhou, Guangdong 516000, China
Abstract
To address the difficulty of data feature selection and the low classification accuracy in music emotion classification, this study proposes a music emotion classification algorithm based on a deep belief network (DBN). The traditional DBN is improved by adding fine-tuning nodes, which makes the model more adjustable. Two music features, pitch frequency and band energy distribution, are then fused to form the model's input. Finally, a support vector machine (SVM) serves as the classifier to perform music emotion classification. The fused algorithm is evaluated on real datasets. The results show that the fused pitch-frequency and band-energy-distribution features effectively represent musical emotion, and that the improved DBN combined with the SVM classifier reaches a classification accuracy of 88.31%, comparing favorably with existing classification methods.
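The fusion-then-classify stage described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature values and labels are synthetic stand-ins, and the improved-DBN stage is omitted, so the SVM classifies the fused features directly.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two feature types named in the abstract:
# per-clip pitch-frequency statistics and band-energy distributions.
n_clips = 200
pitch_features = rng.normal(size=(n_clips, 4))   # e.g. F0 statistics per clip
band_energy = rng.normal(size=(n_clips, 8))      # e.g. energy in 8 sub-bands
# Toy binary emotion labels, for illustration only.
labels = (pitch_features[:, 0] + band_energy[:, 0] > 0).astype(int)

# Feature fusion: concatenate the two feature vectors for each clip.
fused = np.hstack([pitch_features, band_energy])

# In the paper the fused features pass through the improved DBN before the
# SVM; here that stage is skipped and the SVM works on the fused vector.
X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0
)
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf").fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```

On real data, the DBN would supply the learned representation whose output feeds the SVM; the concatenation step above is the "fusion" of the two raw feature streams.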
Subject
Computer Networks and Communications, Computer Science Applications
Cited by
4 articles.
1. Music Emotion Classification using Harris Hawk Optimization based LightGBM Classifier;2024 Tenth International Conference on Bio Signals, Images, and Instrumentation (ICBSII);2024-03-20
2. A Bimodal-based Algorithm for Song Sentiment Classification;2024 4th International Conference on Neural Networks, Information and Communication (NNICE);2024-01-19
3. Machine learning music emotion recognition based on audio features;2023 IEEE 6th International Conference on Information Systems and Computer Aided Education (ICISCAE);2023-09-23
4. Music sentiment classification based on an optimized CNN-RF-QPSO model;Data Technologies and Applications;2023-03-17