Synthetic Document Images with Diverse Shadows for Deep Shadow Removal Networks
Authors:
Matsuo Yuhi 1, Aoki Yoshimitsu 1
Affiliation:
1. Department of Electrical Engineering, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Kanagawa, Japan
Abstract
Shadow removal for document images is an essential task for digitized document applications. Recent shadow removal models are trained on pairs of shadow and shadow-free images, but collecting a large, diverse dataset for document shadow removal is time-consuming and labor-intensive, so only small real datasets are available. Graphic renderers have been used to synthesize shadows and build relatively large datasets; however, the limited number of unique documents and lighting environments in these datasets degrades network performance. This paper presents a large-scale, diverse dataset called the Synthetic Document with Diverse Shadows (SynDocDS) dataset. SynDocDS comprises rendered images whose shadows are augmented by a physics-based illumination model, and it can be used to train more robust, higher-performance deep shadow removal networks. We further propose a Dual Shadow Fusion Network (DSFN). Unlike natural images, document images often have a constant background color, so training a deep shadow removal network requires a strong grasp of global color features. The DSFN captures global color well, understands shadow regions, and efficiently merges shadow attention maps with image features. We conduct experiments on three publicly available datasets, the OSR, Kligler's, and Jung's datasets, to validate the effectiveness of our proposed method. Compared with training on existing synthetic datasets, training our model on the SynDocDS dataset improves the average PSNR from 23.00 dB to 25.70 dB and the average SSIM from 0.959 to 0.971. The experiments also demonstrate that the DSFN clearly outperforms other networks across multiple metrics, including PSNR, SSIM, and OCR performance.
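The PSNR figures reported in the abstract follow the standard peak signal-to-noise ratio definition between a shadow-free reference image and a shadow-removal result. A minimal sketch of that metric in Python (NumPy assumed; the function name `psnr` is illustrative, not from the paper):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in decibels between a shadow-free
    reference image and a shadow-removal result.

    PSNR = 10 * log10(max_val^2 / MSE), where MSE is the mean squared
    error over all pixels (and channels, for color images).
    """
    ref = np.asarray(reference, dtype=np.float64)
    out = np.asarray(restored, dtype=np.float64)
    mse = np.mean((ref - out) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no error
    return 10.0 * np.log10((max_val ** 2) / mse)
```

Higher is better: the reported gain from 23.00 dB to 25.70 dB corresponds to roughly a 1.9x reduction in mean squared error against the shadow-free ground truth.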
References (50 articles; first 5 shown)
1. Bako, S., Darabi, S., Shechtman, E., Wang, J., Sunkavalli, K., and Sen, P. (2016, January 20–24). Removing Shadows from Images of Documents. Proceedings of the Asian Conference on Computer Vision (ACCV 2016), Taipei, Taiwan.
2. Kligler, N., Katz, S., and Tal, A. (2018, January 18–23). Document Enhancement Using Visibility Detection. Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.
3. Jung, S., Hasan, M.A., and Kim, C. (2018, January 2–6). Water-filling: An efficient algorithm for digitized document shadow removal. Proceedings of the Asian Conference on Computer Vision, Perth, Australia.
4. Wang, B., and Chen, C.L.P. (2019, January 22–25). An Effective Background Estimation Method for Shadows Removal of Document Images. Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan.
5. Wang, B., and Chen, C. (2020). Local Water-Filling Algorithm for Shadow Detection and Removal of Document Images. Sensors, 20.