Abstract
Lip-reading is an emerging technology that can be applied in fields such as language recovery, criminal investigation, and identity authentication. We aim to recognize what a speaker is saying from video alone, without audio. To address the challenges posed by varying mouth shapes and homophones, we propose the Mandarin Chinese lip-reading network (MCLRN), an end-to-end model based on a long short-term memory (LSTM) encoder-decoder architecture. The model combines the LSTM encoder-decoder with a spatiotemporal convolutional neural network (STCNN), Word2Vec, and an attention mechanism: the STCNN captures and encodes continuous motion information, Word2Vec converts words into word vectors for feature encoding, and the attention mechanism assigns weights to the target words. We trained and tested the model on a video dataset we built. Experiments show that the model achieves an accuracy of about 72%, demonstrating that MCLRN can be used to identify the words spoken by the speaker.
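To make the described pipeline concrete, the sketch below shows one plausible way to wire an STCNN front-end into an LSTM encoder-decoder with attention in PyTorch. It is only an illustration of the general technique named in the abstract: the layer counts, channel sizes, hidden dimension, and vocabulary size are placeholder assumptions, not the paper's reported configuration, and the embedding layer would be initialized from pretrained Word2Vec vectors in the actual model.

```python
# Minimal sketch (assumed hyperparameters, not the paper's configuration):
# STCNN front-end -> LSTM encoder -> attention-based LSTM decoder over word IDs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class STCNNFrontend(nn.Module):
    """3D convolutions over (time, height, width) to encode lip motion."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 4, 4)),  # keep the time axis, pool space to 4x4
        )
        self.proj = nn.Linear(64 * 4 * 4, out_dim)

    def forward(self, video):                # video: (B, 1, T, H, W)
        feats = self.conv(video)             # (B, 64, T, 4, 4)
        feats = feats.permute(0, 2, 1, 3, 4).flatten(2)  # (B, T, 64*4*4)
        return self.proj(feats)              # (B, T, out_dim)


class AttentionDecoder(nn.Module):
    """LSTM decoder with additive attention over encoder states."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        # In the paper the embeddings would come from Word2Vec; here they are learned.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.attn = nn.Linear(hid_dim * 2, 1)
        self.lstm = nn.LSTM(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, enc_out, tokens, state=None):
        # enc_out: (B, T, hid_dim); tokens: (B, L) previous word indices (teacher forcing)
        emb = self.embed(tokens)
        outputs = []
        h = enc_out.new_zeros(enc_out.size(0), enc_out.size(2))
        for t in range(emb.size(1)):
            # Score each encoder step against the current decoder state.
            query = h.unsqueeze(1).expand_as(enc_out)
            scores = self.attn(torch.cat([enc_out, query], dim=-1)).squeeze(-1)
            weights = F.softmax(scores, dim=-1)               # (B, T)
            context = torch.bmm(weights.unsqueeze(1), enc_out).squeeze(1)
            step_in = torch.cat([emb[:, t], context], dim=-1).unsqueeze(1)
            dec_out, state = self.lstm(step_in, state)
            h = dec_out[:, -1]
            outputs.append(self.out(h))
        return torch.stack(outputs, dim=1)                    # (B, L, vocab_size)


class MCLRN(nn.Module):
    """STCNN features -> LSTM encoder -> attention LSTM decoder."""
    def __init__(self, vocab_size):
        super().__init__()
        self.frontend = STCNNFrontend(out_dim=256)
        self.encoder = nn.LSTM(256, 256, batch_first=True)
        self.decoder = AttentionDecoder(vocab_size)

    def forward(self, video, tokens):
        enc_out, _ = self.encoder(self.frontend(video))
        return self.decoder(enc_out, tokens)


if __name__ == "__main__":
    model = MCLRN(vocab_size=1000)
    video = torch.randn(2, 1, 16, 64, 64)     # 2 clips, 16 grayscale 64x64 mouth-region frames
    tokens = torch.randint(0, 1000, (2, 5))   # previous word indices for teacher forcing
    print(model(video, tokens).shape)          # torch.Size([2, 5, 1000])
```

Training such a model would typically use cross-entropy loss over the per-step word logits; the decoding loop shown here is the step where the attention weights over encoder time steps are computed.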
Funder
Joint Fund of the Ministry of Education for Equipment Pre-research
National Key Research and Development Program of China
Publisher
Springer Science and Business Media LLC
Subject
Computer Networks and Communications, Computer Science Applications, Signal Processing
Cited by
3 articles.
1. Script Generation for Silent Speech in E-Learning;Advances in Educational Technologies and Instructional Design;2024-06-03
2. Retraction Note: Application of deep learning in Mandarin Chinese lip-reading recognition;EURASIP Journal on Wireless Communications and Networking;2024-05-21
3. AI LipReader-Transcribing Speech from Lip Movements;2024 International Conference on Emerging Smart Computing and Informatics (ESCI);2024-03-05