Authors:
Zhang Chengwei, Zhao Haotian
Abstract
Lip reading is a widely used technology that aims to infer text content from visual information. To represent lip information more efficiently and reduce network parameters, most networks first extract features from lip images and then classify those features. In recent studies, most researchers adopt convolutional networks to extract information directly from pixels, which contain a great deal of useless information and thus limit improvements in model accuracy. In this paper, we design a graph structure and a lip segmentation network to effectively represent changes in lip shape across adjacent frames and the region of interest (ROI) within each frame, and we propose two feature extractors: a U-Net-based local feature extractor and a graph-based adjacent-frame feature extractor. We also propose a very challenging dataset that simulates extreme environments, including highly variable face properties, light intensity, and so on. Finally, we design several feature fusion methods at different levels. Experimental results on the proposed challenging dataset show that the model effectively separates useful information from content-irrelevant information. The accuracy of our proposed model is 9.1% higher than that of the baseline, which indicates that our model adapts better to applications in wild environments.
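The abstract describes a two-branch architecture: a U-Net-style local feature extractor over the lip ROI and a graph-based extractor over adjacent-frame lip structure, followed by feature fusion and classification. The sketch below is a minimal PyTorch illustration of that design under stated assumptions; the layer sizes, the landmark count (20), the GRU classifier head, and fusion by concatenation are illustrative choices, not the authors' exact configuration.

```python
# Minimal sketch of a two-branch lip-reading model: a U-Net-style encoder over
# the lip ROI (local branch) and a simple graph convolution over lip-landmark
# graphs from adjacent frames (graph branch), fused by concatenation.
# All hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn

class UNetLocalEncoder(nn.Module):
    """Downsampling half of a U-Net used as a per-frame local feature extractor."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, out_dim)

    def forward(self, x):            # x: (B*T, 1, H, W) grayscale lip ROI crops
        h = self.enc(x).flatten(1)   # (B*T, 128)
        return self.proj(h)          # (B*T, out_dim)

class GraphAdjacentEncoder(nn.Module):
    """One graph-convolution step over lip landmarks linked across adjacent frames."""
    def __init__(self, in_dim=2, out_dim=128):
        super().__init__()
        self.w = nn.Linear(in_dim, out_dim)

    def forward(self, coords, adj):
        # coords: (B, T, N, 2) landmark positions; adj: (N, N) normalized adjacency
        x = self.w(coords)                         # (B, T, N, out_dim)
        x = torch.einsum('ij,btjd->btid', adj, x)  # neighbourhood aggregation
        return x.mean(dim=2)                       # (B, T, out_dim) per-frame graph feature

class TwoBranchLipReader(nn.Module):
    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        self.local = UNetLocalEncoder(feat_dim)
        self.graph = GraphAdjacentEncoder(out_dim=feat_dim)
        self.gru = nn.GRU(2 * feat_dim, 256, batch_first=True)
        self.cls = nn.Linear(256, num_classes)

    def forward(self, frames, coords, adj):
        # frames: (B, T, 1, H, W); coords: (B, T, N, 2); adj: (N, N)
        b, t, c, h, w = frames.shape
        f_local = self.local(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        f_graph = self.graph(coords, adj)
        fused = torch.cat([f_local, f_graph], dim=-1)   # feature-level fusion
        out, _ = self.gru(fused)
        return self.cls(out[:, -1])                     # word-level prediction
```

Concatenation is only the simplest of the "feature fusion methods at different levels" mentioned in the abstract; the same two branch outputs could instead be fused earlier (per-pixel/per-node) or later (score-level averaging).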
Subject
General Physics and Astronomy
Cited by
9 articles.
1. Deep Learning for Visual Speech Analysis: A Survey;IEEE Transactions on Pattern Analysis and Machine Intelligence;2024-09
2. Silent Speech Interface Using Lip-Reading Methods;Communications in Computer and Information Science;2024
3. Deep learning bird song recognition based on MFF-ScSEnet;Ecological Indicators;2023-10
4. Survey on Visual Speech Recognition using Deep Learning Techniques;2023 International Conference on Communication System, Computing and IT Applications (CSCITA);2023-03-31
5. Image Super-Resolution Network Based on Feature Fusion Attention;Journal of Sensors;2022-12-14