Affiliation:
1. College of Computer Science and Technology, Xinjiang University, Urumqi 830017, China
2. Xinjiang Laboratory of Multi-Language Information Technology, Xinjiang University, Urumqi 830017, China
3. Xinjiang Multilingual Information Technology Research Center, Xinjiang University, Urumqi 830017, China
Abstract
Current research on scene text recognition focuses primarily on languages with abundant linguistic resources, such as English and Chinese; comparatively little work is dedicated to low-resource languages. Advanced scene text recognition methods often employ Transformer-based architectures, but Transformers perform suboptimally on low-resource datasets. This paper proposes a Collaborative Encoding Method for scene text recognition in the low-resource Uyghur language. The encoding framework comprises three main modules: the Filter module, the Dual-Branch Feature Extraction module, and the Dynamic Fusion module. The Filter module, consisting of a series of upsampling and downsampling operations, performs coarse-grained filtering on input images to reduce the impact of scene noise and thereby obtain more accurate feature information. The Dual-Branch Feature Extraction module adopts a parallel structure that combines Transformer and Convolutional Neural Network (CNN) encoding to capture global and local information, respectively. The Dynamic Fusion module employs an attention mechanism to dynamically merge the features produced by the Transformer and CNN branches. To address the scarcity of real data for natural-scene Uyghur text recognition, this paper applies two rounds of data augmentation to a dataset of 7267 real images, yielding 254,345 and 3,052,140 scene images, respectively. This partially mitigates the shortage of Uyghur data and makes low-resource scene text recognition research feasible. Experimental results demonstrate that the proposed collaborative encoding approach achieves outstanding performance, improving accuracy by 14.1% over the baseline.
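The abstract describes the encoder's three-module pipeline but not its implementation. Below is a minimal PyTorch sketch of how such a pipeline could fit together; all class names, layer sizes, and the sigmoid-gated fusion design are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class FilterModule(nn.Module):
    """Coarse-grained filtering via paired downsampling/upsampling convolutions."""

    def __init__(self, channels=64):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Downsample then upsample back to the input resolution; the bottleneck
        # discards high-frequency scene noise before feature extraction.
        return self.up(self.down(x))


class DualBranchEncoder(nn.Module):
    """Parallel Transformer branch (global context) and CNN branch (local detail)."""

    def __init__(self, channels=64, num_heads=4, depth=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=num_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        local_feat = self.cnn(x)                         # (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        global_feat = self.transformer(tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        return local_feat, global_feat


class DynamicFusion(nn.Module):
    """Attention-weighted merge of the two branches' features."""

    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, local_feat, global_feat):
        # Per-position attention weights decide how much each branch contributes.
        a = self.gate(torch.cat([local_feat, global_feat], dim=1))
        return a * local_feat + (1 - a) * global_feat


class CollaborativeEncoder(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.filter = FilterModule(channels)
        self.branches = DualBranchEncoder(channels)
        self.fusion = DynamicFusion(channels)

    def forward(self, x):
        x = self.filter(x)
        return self.fusion(*self.branches(x))


if __name__ == "__main__":
    images = torch.randn(2, 3, 16, 64)         # small demo input (B, 3, H, W)
    features = CollaborativeEncoder()(images)
    print(features.shape)                      # torch.Size([2, 64, 16, 64])
```

The gate here is one simple way to realize "dynamic fusion": a 1x1 convolution over the concatenated branches produces per-position weights, so text regions can lean on local CNN detail while ambiguous regions draw on global Transformer context.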
Funder
Joint Funds of the National Natural Science Foundation of China
Shenzhen Municipal Science and Technology Innovation Committee Project