Abstract
Singing voice separation on robots faces the problem of interpreting ambiguous auditory signals. The acoustic signal that a humanoid robot perceives through its onboard microphones is a mixture of singing voice, music, and noise, degraded by distortion, attenuation, and reverberation. In this paper, we used a 3D Inception-ResUNet structure within a U-shaped encoder-decoder network to better exploit the spatial and spectral information of the spectrogram. The model was trained with multiple objectives: a magnitude consistency loss, a phase consistency loss, and a magnitude correlation consistency loss. We recorded the singing voice and accompaniment derived from the MIR-1K dataset with NAO robots and synthesized a 10-channel dataset for training the model. The experimental results show that the proposed model trained with these multiple objectives reaches an average NSDR of 11.55 dB on the test dataset, outperforming the comparison model.
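As a rough illustration of how such a multi-objective training criterion can be combined, the sketch below shows one hypothetical way to implement it in PyTorch. The abstract does not give the exact formulations or weights, so the choices here are assumptions: magnitude consistency as an L1 distance on magnitude spectrograms, phase consistency as a cosine distance on phase spectrograms, and magnitude correlation consistency as a Pearson-correlation term, combined with arbitrary weights w_mag, w_phase, and w_corr.

```python
# Hypothetical sketch of a multi-objective loss of the kind described in the
# abstract; the concrete loss terms and weights are illustrative assumptions,
# not the paper's actual formulation.
import torch

def multi_objective_loss(mag_est, mag_ref, phase_est, phase_ref,
                         w_mag=1.0, w_phase=1.0, w_corr=1.0):
    # Magnitude consistency: mean L1 distance between magnitude spectrograms.
    loss_mag = torch.mean(torch.abs(mag_est - mag_ref))

    # Phase consistency: 1 - cos(phase difference), averaged over all bins.
    loss_phase = torch.mean(1.0 - torch.cos(phase_est - phase_ref))

    # Magnitude correlation consistency: 1 - Pearson correlation between
    # the flattened estimated and reference magnitudes.
    e = mag_est.flatten() - mag_est.mean()
    r = mag_ref.flatten() - mag_ref.mean()
    corr = torch.sum(e * r) / (torch.norm(e) * torch.norm(r) + 1e-8)
    loss_corr = 1.0 - corr

    return w_mag * loss_mag + w_phase * loss_phase + w_corr * loss_corr
```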
Funder
China University Industry, University and Research Innovation Fund
Publisher
Public Library of Science (PLoS)