Music Emotion Recognition Model Using Gated Recurrent Unit Networks and Multi-Feature Extraction

Author:

Niu Nana¹

Affiliation:

1. Henan Finance University, Zhengzhou 450046, China

Abstract

A large number of music platforms have recently appeared on the Internet, yet existing deep learning frameworks for music recommendation remain limited in their ability to accurately identify the emotional type of a piece of music and recommend it to users. Music is commonly classified by language, musical style, thematic scene, and era, which is far from sufficient and creates difficulties for music classification and identification. This paper therefore improves the accuracy of music emotion recognition through multi-feature extraction of music emotion, the design of a BiGRU model, and the design of a music theme scene classification model. It develops a BiGRU emotion recognition model and compares it with other models. BiGRU correctly identifies happy and sad music up to 79% and 81.01% of the time, respectively, far exceeding Rnet-LSTM; the greater the difference between emotion categories, the easier it is to analyze the feature sequences containing emotional features and the higher the recognition accuracy, which is especially evident in the recognition of happiness and sadness. The model can meet users' needs for music recognition in a variety of settings.
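The abstract describes a BiGRU model that classifies emotion from multi-feature sequences extracted from music. As a rough illustration of that kind of architecture, the following is a minimal PyTorch sketch of a bidirectional-GRU classifier over pre-extracted acoustic feature sequences (e.g., MFCC frames); the feature dimension, hidden size, layer count, and the four emotion classes are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a bidirectional-GRU (BiGRU) music emotion classifier.
# Assumptions (not from the paper): 40-dim input features, 128 hidden units,
# 2 GRU layers, 4 emotion classes, mean-pooling over time.
import torch
import torch.nn as nn

class BiGRUEmotionClassifier(nn.Module):
    def __init__(self, feature_dim=40, hidden_dim=128, num_classes=4):
        super().__init__()
        # Bidirectional GRU reads the feature sequence forwards and backwards.
        self.bigru = nn.GRU(input_size=feature_dim,
                            hidden_size=hidden_dim,
                            num_layers=2,
                            batch_first=True,
                            bidirectional=True)
        # Classification head over the pooled forward+backward outputs.
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, time_steps, feature_dim)
        outputs, _ = self.bigru(x)          # (batch, time_steps, 2 * hidden_dim)
        pooled = outputs.mean(dim=1)        # average over time
        return self.fc(pooled)              # emotion logits

# Example: a batch of 8 clips, each 300 frames of 40-dim features.
model = BiGRUEmotionClassifier()
logits = model(torch.randn(8, 300, 40))
print(logits.shape)  # torch.Size([8, 4])
```

In this sketch the bidirectional recurrence lets each frame's representation depend on both past and future context, which is the usual motivation for choosing BiGRU over a unidirectional GRU for sequence-level emotion classification.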

Publisher

Hindawi Limited

Subject

Computer Networks and Communications, Computer Science Applications

Cited by 3 articles.

1. Optimized recurrent neural network based brain emotion recognition technique. Multimedia Tools and Applications, 2024-03-25.

2. A Bimodal-based Algorithm for Song Sentiment Classification. 2024 4th International Conference on Neural Networks, Information and Communication (NNICE), 2024-01-19.

3. Polyphonic Instrument Emotion Recognition using Stacked Auto Encoders: A Dimensionality Reduction Approach. Procedia Computer Science, 2023.
