Exploring Style-Robust Scene Text Detection via Style-Aware Learning
Published: 2024-01-05
Volume: 13, Issue: 2, Page: 243
ISSN: 2079-9292
Container-title: Electronics
Short-container-title: Electronics
Language: en
Author:
Cai Yuanqiang 1,2, Zhou Fenfen 3,4, Yin Ronghui 5
Affiliation:
1. State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
2. School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
3. Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing 100101, China
4. College of Robotics, Beijing Union University, Beijing 100027, China
5. Guangdong Science & Technology Infrastructure Center, Guangzhou 510033, China
Abstract
Current scene text detectors achieve remarkable accuracy across diverse styles of datasets by fine-tuning models multiple times, but this approach is time-consuming and hinders model generalization. A training strategy that requires only a single training pass over all datasets is therefore a promising alternative; however, the mismatch between text styles poses challenges to accuracy in such a setting. To mitigate these issues, we propose a style-aware learning network (SLNText) for style-robust text detection in the wild. It consists of a style-aware head that distinguishes the text styles of images and a dynamic selection head that detects text in images of different styles. SLNText is trained only once and achieves superior performance by automatically learning from multiple text styles, overcoming the style mismatch inherent in one-size-fits-all approaches. By using a single set of network parameters, our method significantly reduces training cost while maintaining satisfactory performance on datasets of several styles. Extensive experiments demonstrate that SLNText performs well across several styles of datasets, showcasing its effectiveness and efficiency as a promising solution to style-robust scene text detection.
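The abstract describes two components: a style-aware head that predicts the text style of an image, and a dynamic selection head that routes each image to a style-specific detection branch. The sketch below is a minimal illustration of that idea in PyTorch; it is not the authors' released implementation, and the module names, feature dimension, and single-channel segmentation output are assumptions made for the example.

```python
import torch
import torch.nn as nn


class StyleAwareDetector(nn.Module):
    """Illustrative sketch (hypothetical, not the paper's code): a style-aware
    head classifies the image's text style, and a dynamic selection step picks
    the matching style-specific detection branch over shared backbone features."""

    def __init__(self, backbone: nn.Module, feat_dim: int = 256, num_styles: int = 3):
        super().__init__()
        self.backbone = backbone
        # Style-aware head: global pooling + linear classifier over style labels.
        self.style_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat_dim, num_styles),
        )
        # One lightweight detection branch per text style (here: a 1x1 conv
        # producing a single-channel text probability map).
        self.det_heads = nn.ModuleList(
            [nn.Conv2d(feat_dim, 1, kernel_size=1) for _ in range(num_styles)]
        )

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)            # (B, feat_dim, H, W)
        style_logits = self.style_head(feats)    # (B, num_styles)
        style_idx = style_logits.argmax(dim=1)   # predicted style per image
        # Dynamic selection: run all branches, then keep each image's own branch.
        maps = torch.stack([head(feats) for head in self.det_heads], dim=1)  # (B, S, 1, H, W)
        selected = maps[torch.arange(images.size(0)), style_idx]             # (B, 1, H, W)
        return selected.sigmoid(), style_logits
```

In this reading, training once on all datasets would supervise the style head with per-dataset style labels and the selected branch with the usual detection loss, so a single set of parameters serves every style; how the paper actually couples the two heads is detailed in the full text.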
Funder
Beijing University of Posts and Telecommunications Basic Research Fund; State Key Laboratory of Networking and Switching Technology; Beijing Natural Science Foundation