Affiliation:
1. College of Music and Dance, Shenzhen University, Shenzhen, Guangdong, China
Abstract
With the development of Internet technology, multimedia information resources are growing rapidly. Faced with the massive resources in multimedia music libraries, it is extremely difficult for people to find the target music that meets their needs. Enabling computers to analyze and perceive users' needs for music resources has therefore become a goal for the future development of human-computer interaction. Content-based music information retrieval is mainly applied to the automatic classification and recognition of music. Traditional feedforward neural networks are prone to losing local information when extracting singing voice features. For this reason, taking full account of information persistence during network propagation, this paper proposes an enhanced two-stage super-resolution reconstruction residual network that can effectively integrate the features learned by each layer while increasing the depth of the network. The first reconstruction stage completes hierarchical learning of singing voice features through dense residual units to improve information integration. The second reconstruction stage performs residual relearning on the high-frequency singing voice information learned in the first stage to reduce the reconstruction error. Between the two stages, the model introduces feature scaling and dilated convolution to reduce information redundancy and enlarge the receptive field of the convolution kernel. On this basis, a monophonic singing voice separation method based on a high-resolution neural network is proposed.
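The two ideas named above — dilated convolution enlarging the receptive field, and residual units that learn only a correction to their input — can be illustrated with a minimal 1-D NumPy sketch. This is an assumption-laden toy (function names, shapes, and the 1-D setting are illustrative), not the paper's actual network:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """1-D dilated convolution (valid padding): taps are spaced
    `dilation` samples apart, so the receptive field grows to
    k + (k - 1) * (dilation - 1) with no extra parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

def residual_unit(x, kernel, dilation=1):
    """Residual relearning: add the conv output back onto the input,
    so the unit only has to model the residual, not the full signal."""
    y = dilated_conv1d(x, kernel, dilation)
    pad = (len(x) - len(y)) // 2
    out = x.copy()
    out[pad:pad + len(y)] += y
    return out

x = np.arange(10, dtype=float)
k = np.array([0.25, 0.5, 0.25])
print(dilated_conv1d(x, k, dilation=2).shape)  # (6,): 3 taps span 5 samples
```

With dilation 2, the 3-tap kernel covers 5 input samples, which is how stacking a few such layers lets the network see long spans of the spectrogram cheaply.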
Because the high-resolution network contains parallel subnetworks at different resolutions, it maintains the original-resolution representation alongside multiple low-resolution representations, avoiding the information loss caused by downsampling in serial networks; repeated multi-scale feature fusion generates new semantic representations, allowing comprehensive, high-precision, and highly abstract features to be learned. In this paper, the high-resolution neural network is used to model the magnitude spectrogram in order to accurately estimate the target time-frequency amplitude spectrograms. Experiments on the MIR-1K dataset show that, compared with the current leading SH-4Stack model, the proposed method improves the SDR, SIR, and SAR indicators that measure separation performance, confirming the effectiveness of the algorithm.
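SDR, SIR, and SAR come from the BSS-Eval family of source-separation metrics. A simplified, scale-invariant SDR (omitting BSS-Eval's full decomposition into interference and artifact terms) can be sketched as follows; the signal names and the synthetic test signal are assumptions for illustration:

```python
import numpy as np

def sdr(reference, estimate, eps=1e-12):
    """Simplified scale-invariant SDR: project the estimate onto the
    reference to get the target component, treat the remainder as
    distortion, and report the energy ratio in dB."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference           # part of the estimate explained by the reference
    distortion = estimate - target       # everything else: interference + artifacts
    return 10 * np.log10((np.dot(target, target) + eps) /
                         (np.dot(distortion, distortion) + eps))

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
noisy = clean + 0.1 * rng.standard_normal(16000)
print(sdr(clean, noisy))  # roughly 20 dB for 10%-amplitude noise
```

Higher SDR means less overall distortion in the separated voice; SIR and SAR isolate the interference and artifact components of that same error.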
Cited by
2 articles.