Affiliation:
1. School of Aviation Services and Music, Nanchang Hangkong University, Nanchang 330063, Jiangxi, China
2. School of Music, Jiangxi Normal University, Nanchang 330067, Jiangxi, China
Abstract
In recent years, the explosive growth of online music resources has made music information difficult to retrieve and manage, and efficient retrieval and classification of music information has become a hot research topic. Thayer’s two-dimensional emotion plane is selected as the basis for building the music emotion database. Music is divided into five categories, and the concept of continuous emotion perception is introduced: the emotion of a piece is regarded as a point on the two-dimensional plane whose location is determined by two affect variables, valence and arousal (VA). Manual labeling is used to determine the region occupied by each of the five emotion categories on the plane, and regression is used to learn the relationship between VA values and acoustic features, so that the music emotion classification problem is transformed into a regression problem. A regression-based music emotion classification system is designed and implemented, consisting of a training part and a testing part. In the training part, three algorithms, namely, polynomial regression, support vector regression, and k-plane piecewise regression, are used to obtain the regression model. In the testing part, the VA value of the input music is predicted by regression and the music is then classified; system performance is evaluated by classification accuracy. Results show that combining support vector regression with k-plane piecewise regression improves accuracy by 3 to 4 percentage points over either algorithm alone, and by 6 percentage points over the traditional classification method based on a support vector machine. Music emotion is also classified with support vector machine, K-nearest neighbor, fuzzy neural network, fuzzy K-nearest neighbor, Bayesian, and Fisher linear discriminant classifiers; of these, the support vector machine, fuzzy K-nearest neighbor, and Fisher linear discriminant classifiers achieve accuracies above 80%. Finally, a new “mixed classifier” algorithm is proposed, with which the music emotion recognition rate reaches 84.9%.
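The pipeline described above (regress acoustic features to a VA point, then classify by where that point falls on Thayer’s plane) can be illustrated with a minimal sketch. This is not the authors’ code: the feature dimensions, the five region centres, and the nearest-centre assignment rule are illustrative assumptions, and support vector regression stands in for the full combination of regressors reported in the abstract.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# 1) fit one support vector regressor per VA dimension,
# 2) map a clip's acoustic features to a (valence, arousal) point,
# 3) assign the emotion class whose labelled region centre is nearest.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

# Hypothetical training data: 200 clips, 20 acoustic features each,
# VA labels scaled to [-1, 1] on both axes.
rng = np.random.default_rng(0)
X_train = rng.random((200, 20))
VA_train = rng.uniform(-1, 1, (200, 2))

# One RBF-kernel SVR per output (valence and arousal).
regressor = MultiOutputRegressor(SVR(kernel="rbf", C=1.0, epsilon=0.1))
regressor.fit(X_train, VA_train)

# Assumed centres of the five manually labelled emotion regions on the plane.
region_centres = {
    "exuberant": (0.7, 0.7),
    "anxious": (-0.7, 0.7),
    "depressed": (-0.7, -0.7),
    "content": (0.7, -0.7),
    "calm": (0.0, 0.0),
}

def classify(features: np.ndarray) -> str:
    """Predict the VA point for one clip, then pick the nearest region centre."""
    va = regressor.predict(features.reshape(1, -1))[0]
    return min(region_centres,
               key=lambda name: np.linalg.norm(va - np.array(region_centres[name])))

print(classify(rng.random(20)))
```

In the actual system, the manually labelled region boundaries (rather than simple nearest-centre distance) would determine the final class, and the regression model would be the combination of support vector regression and k-plane piecewise regression reported in the abstract.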
Funder
Humanities and Social Science Fund of Ministry of Education of China
Subject
General Engineering, General Mathematics
Cited by
7 articles.
1. Assessment of Human Emotional Responses to AI–Composed Music: A Systematic Literature Review;2024 International Research Conference on Smart Computing and Systems Engineering (SCSE);2024-04-04
2. Music Emotion Classification using Harris Hawk Optimization based LightGBM Classifier;2024 Tenth International Conference on Bio Signals, Images, and Instrumentation (ICBSII);2024-03-20
3. A Bimodal-based Algorithm for Song Sentiment Classification;2024 4th International Conference on Neural Networks, Information and Communication (NNICE);2024-01-19
4. Emotional Behavior Analysis of Music Course Evaluation Based on Online Comment Mining;International Journal of Information Technology and Web Engineering;2024-01-17
5. Feature Aggregation with Two-Layer Ensemble Framework for Multilingual Speech Emotion Recognition;Mathematical Problems in Engineering;2023-12-11