Affiliation:
1. INRIA, France
2. Queen Mary University of London, United Kingdom
3. University of Edinburgh, United Kingdom
Abstract
Most sound scenes result from the superposition of several sources, which can be separately perceived and analyzed by human listeners. Source separation aims to provide machine listeners with similar skills by extracting the sounds of individual sources from a given scene. Existing separation systems operate either by emulating the human auditory system or by inferring the parameters of probabilistic sound models. In this chapter, the authors focus on the latter approach and provide a joint overview of established and recent models, including independent component analysis, local time-frequency models and spectral template-based models. They show that most models are instances of one of the following two general paradigms: linear modeling or variance modeling. They compare the merits of either paradigm and report objective performance figures. They also conclude by discussing promising combinations of probabilistic priors and inference algorithms that could form the basis of future state-of-the-art systems.
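As a rough illustration of the two paradigms named in the abstract (not the chapter's own code), the following Python sketch uses scikit-learn: FastICA stands in for linear modeling, where the mixture is a linear combination of source waveforms, and NMF on a power spectrogram stands in for variance modeling, where spectral templates and time activations describe source variances. All signals, shapes, and parameters below are synthetic placeholders chosen for brevity.

```python
# A minimal sketch contrasting linear modeling and variance modeling.
# Signals are synthetic placeholders; this is illustrative only.
import numpy as np
from sklearn.decomposition import FastICA, NMF

rng = np.random.default_rng(0)

# --- Linear modeling: the mixture is a linear combination of sources. ---
# Two toy sources observed at two microphones (determined case).
t = np.linspace(0, 1, 8000)
s = np.stack([np.sin(2 * np.pi * 440 * t),          # pure tone
              np.sign(np.sin(2 * np.pi * 3 * t))])  # square wave
A = np.array([[1.0, 0.5], [0.4, 1.0]])              # mixing matrix
x = A @ s                                           # observed mixtures
ica = FastICA(n_components=2, random_state=0)
s_hat = ica.fit_transform(x.T).T                    # sources recovered up to scale/permutation

# --- Variance modeling: the power spectrogram is a sum of nonnegative parts. ---
# NMF factors V ~ W @ H, with columns of W acting as spectral templates
# and rows of H as their temporal activations.
V = np.abs(rng.standard_normal((257, 100))) ** 2    # stand-in for a power spectrogram
nmf = NMF(n_components=2, init="random", random_state=0, max_iter=300)
W = nmf.fit_transform(V)                            # spectral templates (257 x 2)
H = nmf.components_                                 # temporal activations (2 x 100)
```

In practice the variance-modeling route operates on a real short-time Fourier transform magnitude or power spectrogram rather than random data, and sources are reconstructed by Wiener-style filtering of the mixture; the sketch only shows the factorization step.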
Cited by
15 articles.
1. Speaker Counting by Scattered Microphone Array Based on DOA and Eigenvalue Estimations in Adverse Environments;2023 9th International Conference on Signal Processing and Communication (ICSC);2023-12-21
2. Time-Domain Audio Source Separation Based on Gaussian Processes with Deep Kernel Learning;2023 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA);2023-10-22
3. Blind Source Counting and Separation with Relative Harmonic Coefficients;ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2023-06-04
4. Audio Source Count Estimation using Deep Learning;2022 International Conference on Signal and Information Processing (IConSIP);2022-08-26
5. Unsupervised Speech Enhancement Using Dynamical Variational Autoencoders;IEEE/ACM Transactions on Audio, Speech, and Language Processing;2022