Authors:
William Ravenscroft, Stefan Goetze, Thomas Hain
Abstract
Separation of speech mixtures in noisy and reverberant environments remains a challenging task for state-of-the-art speech separation systems. Time-domain audio separation networks (TasNets) are among the most commonly used architectures for this task. TasNet models have demonstrated strong performance on typical speech separation benchmarks in which the speech is not contaminated with noise; when additive or convolutive noise is present, separation performance degrades significantly. TasNets are typically composed of an encoder network, a mask estimation network and a decoder network. Without any pre-processing of the input data or post-processing of the separation network output, this design places the majority of the onus for enhancing the signal on the mask estimation network. This work proposes multi-head attention (MHA) as an additional layer in the encoder and decoder to help the separation network attend to encoded features that are relevant to the target speakers and, conversely, to suppress noisy disturbances in the encoded features. Incorporating MHA into the encoder network in particular leads to consistent performance improvements across numerous quality and intelligibility metrics under a variety of acoustic conditions on the WHAMR corpus, a dataset of noisy reverberant speech mixtures. The use of MHA in the decoder network is also investigated, where smaller but consistent performance improvements are obtained for specific model configurations. The best-performing MHA models yield a mean 0.6 dB scale-invariant signal-to-distortion ratio (SISDR) improvement on noisy reverberant mixtures over a baseline 1D convolutional encoder, and a mean 1 dB SISDR improvement on clean speech mixtures.
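The encoder / mask-estimation / decoder pipeline described above, with an MHA layer after the encoder, can be sketched in plain numpy. This is a minimal illustrative toy, not the paper's implementation: all shapes, the single MHA layer with a residual connection, the ReLU convolutional encoder, and the sigmoid stand-in for the mask estimation network are assumptions made for the sketch.

```python
import numpy as np

def frames(x, win, hop):
    """Split waveform x into overlapping frames (win samples, hop stride)."""
    n = 1 + (len(x) - win) // hop
    return np.stack([x[i * hop : i * hop + win] for i in range(n)])

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multihead_attention(X, heads, rng):
    """Toy multi-head self-attention over encoded frames X (T x D)."""
    T, D = X.shape
    d = D // heads
    Wq, Wk, Wv, Wo = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(4))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    out = np.zeros_like(X)
    for h in range(heads):
        s = slice(h * d, (h + 1) * d)
        A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(d))  # T x T attention weights
        out[:, s] = A @ V[:, s]
    return out @ Wo

rng = np.random.default_rng(0)
x = rng.standard_normal(16000)                   # 1 s of audio at 16 kHz
win, hop, D = 16, 8, 64
B = rng.standard_normal((win, D)) / np.sqrt(win)  # encoder basis (1-D conv kernel)

F = frames(x, win, hop)                           # T x win frames
E = np.maximum(F @ B, 0)                          # encoded features (ReLU conv encoder)
E = E + multihead_attention(E, heads=4, rng=rng)  # MHA layer with residual connection
M = 1 / (1 + np.exp(-E))                          # stand-in for the mask estimation net
y_frames = (E * M) @ B.T                          # masked features decoded to frames
```

A full decoder would overlap-add `y_frames` back into a waveform; one such masked path would be computed per target speaker.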
Funder
United Kingdom Research and Innovation
Cited by
6 articles.
1. Combining Conformer and Dual-Path-Transformer Networks for Single Channel Noisy Reverberant Speech Separation;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
2. On Time Domain Conformer Models for Monaural Speech Separation in Noisy Reverberant Acoustic Environments;2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU);2023-12-16
3. On Data Sampling Strategies for Training Neural Network Speech Separation Models;2023 31st European Signal Processing Conference (EUSIPCO);2023-09-04
4. Perceive and Predict: Self-Supervised Speech Representation Based Loss Functions for Speech Enhancement;ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2023-06-04
5. Deformable Temporal Convolutional Networks for Monaural Noisy Reverberant Speech Separation;ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2023-06-04