Lip2Speech: Lightweight Multi-Speaker Speech Reconstruction with Gabor Features
Published: 2024-01-17
Volume: 14
Issue: 2
Page: 798
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Authors:
Zhongping Dong 1, Yan Xu 1, Andrew Abel 2 (ORCID), Dong Wang 3
Affiliations:
1. School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
2. Computer and Information Sciences, University of Strathclyde, Glasgow G1 1XQ, Scotland, UK
3. Center for Speech and Language Technologies (CSLT), BNRist at Tsinghua University, Beijing 100084, China
Abstract
In environments characterised by noise or the absence of audio signals, visual cues, notably facial and lip movements, serve as valuable substitutes for missing or corrupted speech signals. In these scenarios, speech reconstruction can potentially generate speech from visual data. Recent advances in this domain have predominantly relied on end-to-end deep learning models, such as Convolutional Neural Networks (CNNs) or Generative Adversarial Networks (GANs). However, these models are hampered by intricate and opaque architectures and by a lack of speaker independence; consequently, achieving multi-speaker speech reconstruction without supplementary information is challenging. This research introduces an innovative Gabor-based speech reconstruction system tailored for lightweight and efficient multi-speaker speech restoration. Building on our Gabor feature extraction technique, we propose two novel models: GaborCNN2Speech and GaborFea2Speech. These models employ a rapid Gabor feature extraction method to derive low-dimensional mouth-region features, using filtered Gabor mouth images and low-dimensional Gabor features as visual inputs. An encoded spectrogram serves as the audio target, and a Long Short-Term Memory (LSTM)-based model generates coherent speech output. In comprehensive experiments on the GRID corpus, the proposed Gabor-based models demonstrated superior sentence and vocabulary reconstruction compared to traditional end-to-end CNN models, while remaining lightweight and fast to run. Notably, the GaborFea2Speech model achieves robust multi-speaker speech reconstruction without requiring supplementary information, marking a significant milestone in the field of speech reconstruction.
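To make the pipeline described above concrete, the following is a minimal sketch of its two stages: a Gabor filter bank reduces each mouth-region frame to a low-dimensional feature vector, and an LSTM maps the resulting feature sequence to spectrogram frames. All parameters (kernel size, filter bank, feature dimension, network sizes) and names (gabor_mouth_features, GaborFea2SpeechSketch) are illustrative assumptions, not the published configuration.

```python
# Minimal sketch (not the authors' implementation): Gabor mouth features -> LSTM -> spectrogram.
# All filter parameters, dimensions, and names here are illustrative assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn

def gabor_mouth_features(mouth_gray, n_orientations=4):
    """Filter a grayscale mouth crop with a small Gabor bank and reduce each
    response map to its mean and standard deviation, yielding one
    low-dimensional feature vector per video frame."""
    feats = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations  # orientation of this Gabor filter
        # args: ksize, sigma, theta, lambda (wavelength), gamma (aspect), psi (phase)
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0.0)
        response = cv2.filter2D(mouth_gray.astype(np.float32), cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])
    return np.asarray(feats, dtype=np.float32)  # shape: (2 * n_orientations,)

class GaborFea2SpeechSketch(nn.Module):
    """Hypothetical LSTM that maps a sequence of per-frame Gabor features to
    encoded spectrogram frames (layer sizes are placeholders)."""
    def __init__(self, feat_dim=8, hidden=256, spec_bins=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(hidden, spec_bins)

    def forward(self, x):
        out, _ = self.lstm(x)   # (batch, frames, hidden)
        return self.proj(out)   # (batch, frames, spec_bins)

# Usage on dummy data: GRID clips are 3 s at 25 fps, i.e. 75 frames per sentence.
frame = np.random.randint(0, 256, (64, 128), dtype=np.uint8)       # fake mouth crop
seq = np.stack([gabor_mouth_features(frame)] * 75)                  # (75, 8)
spec = GaborFea2SpeechSketch()(torch.from_numpy(seq).unsqueeze(0))  # (1, 75, 128)
```

A spectrogram predicted this way would still need to be inverted to a waveform (e.g., with a standard method such as Griffin-Lim) to obtain audible speech; the compact per-frame feature vector is what keeps the model lightweight relative to end-to-end CNNs operating on raw pixels.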
Funder
XJTLU Research Development Fund