End-to-End Depth-Guided Relighting Using Lightweight Deep Learning-Based Method
Published: 2023-08-28
Issue: 9
Volume: 9
Page: 175
ISSN: 2313-433X
Container title: Journal of Imaging
Language: en
Short container title: J. Imaging
Author:
Nathan Sabari 1, Kansal Priya 1
Affiliation:
1. Couger Inc., Tokyo 150-0001, Japan
Abstract
Image relighting, which involves modifying the lighting conditions of an image while preserving its visual content, is fundamental to computer vision. This study introduces a bi-modal, lightweight deep learning model for depth-guided relighting. The model exploits the Res2Net Squeezed block's ability to capture long-range dependencies and to enhance the feature representation of both the input image and its corresponding depth map. The proposed model adopts an encoder–decoder structure with Res2Net Squeezed blocks integrated at each encoding and decoding stage. The model was trained and evaluated on the VIDIT dataset, which consists of 300 triplets of images; each triplet contains an input image, its corresponding depth map, and the relit image under diverse lighting conditions, such as different illuminant angles and color temperatures. The enhanced feature representation and improved information flow within the Res2Net Squeezed blocks enable the model to handle complex lighting variations and generate realistic relit images. The experimental results demonstrate the effectiveness of the proposed approach in terms of relighting accuracy, measured by metrics such as PSNR and SSIM, as well as visual quality.
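The architecture outlined in the abstract, a bi-modal encoder–decoder that fuses an RGB image with its depth map through Res2Net-style blocks with squeeze-and-excitation ("squeezed") gating, can be sketched in a few dozen lines of PyTorch. The sketch below is an illustrative approximation only: the block design, layer widths, and network depth are assumptions made for readability, and it is not the authors' published implementation.

```python
# Minimal PyTorch sketch of a depth-guided encoder-decoder with
# Res2Net-style blocks plus squeeze-and-excitation gating.
# Illustrative approximation of the idea in the abstract; layer widths,
# depths, and the exact block design are assumptions, not the paper's code.
import torch
import torch.nn as nn


class Res2NetSqueezedBlock(nn.Module):
    """Res2Net-style hierarchical split/merge followed by SE channel gating."""

    def __init__(self, channels: int, scales: int = 4, reduction: int = 8):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        width = channels // scales
        # One 3x3 conv per split; the first split is passed through unchanged.
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, 3, padding=1) for _ in range(scales - 1)
        )
        # Squeeze-and-excitation: global pooling -> bottleneck MLP -> sigmoid gate.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        splits = torch.chunk(x, self.scales, dim=1)
        out, prev = [splits[0]], None
        for i, conv in enumerate(self.convs):
            # Cascade: each split also receives the previous split's output,
            # which widens the receptive field (long-range dependencies).
            prev = splits[i + 1] if prev is None else splits[i + 1] + prev
            prev = self.act(conv(prev))
            out.append(prev)
        y = torch.cat(out, dim=1)
        return x + y * self.se(y)  # residual connection with SE gating


class DepthGuidedRelightNet(nn.Module):
    """Tiny encoder-decoder that fuses the RGB image and its depth map."""

    def __init__(self, base: int = 32):
        super().__init__()
        # Bi-modal input: 3 RGB channels + 1 depth channel, concatenated.
        self.stem = nn.Conv2d(3 + 1, base, 3, padding=1)
        self.enc1 = Res2NetSqueezedBlock(base)
        self.down = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)
        self.enc2 = Res2NetSqueezedBlock(base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = Res2NetSqueezedBlock(base)
        self.head = nn.Conv2d(base, 3, 3, padding=1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = self.stem(torch.cat([rgb, depth], dim=1))
        s1 = self.enc1(x)
        x = self.enc2(self.down(s1))
        x = self.dec1(self.up(x) + s1)  # skip connection from the encoder
        return torch.sigmoid(self.head(x))  # relit image in [0, 1]


if __name__ == "__main__":
    net = DepthGuidedRelightNet()
    rgb = torch.rand(1, 3, 128, 128)
    depth = torch.rand(1, 1, 128, 128)
    print(net(rgb, depth).shape)  # torch.Size([1, 3, 128, 128])
```

In this sketch, depth guidance is realized by the simplest possible fusion, concatenating the depth map with the RGB channels at the stem; the published model may fuse the two modalities differently and uses more encoding/decoding stages than shown here.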
Subject
Electrical and Electronic Engineering; Computer Graphics and Computer-Aided Design; Computer Vision and Pattern Recognition; Radiology, Nuclear Medicine and Imaging