Affiliation:
1. Sri Eshwar College of Engineering
Abstract
Scene text detection is challenging due to variations in text appearance, background, and orientation, and applications such as OCR, image understanding, and autonomous driving demand greater robustness, accuracy, and efficiency. Combining a Variational Autoencoder (VAE) with a Generative Adversarial Network (GAN) has the potential to yield a more robust and powerful text detection network. The proposed network comprises three modules: a VAE module, a GAN module, and a text detection module. The VAE module, built on an encoder-decoder structure, generates diverse and variable text region proposals; the GAN module, built on a generator-discriminator structure, refines and enhances these regions to make them more realistic and accurate; and the text detection module detects text regions in the input image and assigns a confidence score to each. The entire network is trained end-to-end to minimize a joint loss function combining the VAE loss, the GAN loss, and the text detection loss: the VAE loss encourages diversity and variability in the generated regions, the GAN loss pushes them toward realism and accuracy, and the text detection loss drives high detection accuracy on the input image. The proposed network is evaluated on several datasets, including Total-Text, CTW1500, ICDAR 2015, ICDAR 2017, ReCTS, TD500, COCO-Text, SynthText, Street View Text, and KAIST Scene Text, and achieves promising results.
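To make the joint objective concrete, the following is a minimal PyTorch-style sketch of how the three loss terms described above could be combined. The abstract gives no code, weighting coefficients, or tensor layouts, so the module outputs, target names, and weights here are hypothetical illustrations, not the authors' implementation.

import torch
import torch.nn.functional as F

def joint_loss(vae_out, gan_out, det_out, targets,
               w_vae=1.0, w_gan=1.0, w_det=1.0):
    # VAE loss: reconstruction term plus KL divergence, encouraging
    # diverse and variable generated text regions.
    recon, mu, logvar = vae_out
    l_recon = F.mse_loss(recon, targets["regions"])
    l_kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    l_vae = l_recon + l_kl

    # GAN (generator) loss: the refined regions should look real to
    # the discriminator (a non-saturating form is assumed here).
    d_fake = gan_out["disc_on_fake"]
    l_gan = F.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))

    # Detection loss: predicted confidence scores against
    # ground-truth text/non-text labels.
    l_det = F.binary_cross_entropy_with_logits(
        det_out["scores"], targets["labels"])

    # Weighted sum of the three terms, minimized end-to-end.
    return w_vae * l_vae + w_gan * l_gan + w_det * l_det

In an actual implementation the discriminator would be trained with its own adversarial loss in alternation with this generator-side objective; the sketch shows only the joint term the abstract describes.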
Publisher
Research Square Platform LLC
Cited by
1 article.