Authors:
Park Pilseo, Oh Heungmin, Kim Hyuncheol
Abstract
There has been a growing trend of using deep learning-based approaches for photo retouching, which aims to enhance unattractive images and make them visually appealing. However, existing methods consider only the RGB color space, which limits the color information available for editing. To address this issue, we propose a dual-color-space network that extracts color representations from multiple color spaces to provide more robust color information. Our approach is based on the observation that converting an image to a different color space produces a new image that can be further processed by a neural network. Hence, we utilize two separate networks, a transitional network and a base network, each operating in a different color space. Specifically, the input RGB image is converted to another color space (e.g., YCbCr) using a color space converter (CSC). The resulting image is then passed through the transitional network, which extracts color representations from the corresponding color space using a color prediction module (CPM). The output of the transitional network is converted back to the RGB space and fed into the base network, which operates in the RGB space. By utilizing global priors from each representation in the different color spaces, we guide the retouching process to produce natural and realistic results. Experimental results demonstrate that our proposed method outperforms state-of-the-art methods on the MIT-Adobe FiveK dataset, and an in-depth analysis and ablation study highlight the advantages of our approach.
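The abstract only sketches the pipeline (RGB input, CSC to YCbCr, transitional network, CSC back to RGB, base network), so the following is a minimal, hypothetical sketch of that data flow. The module names (`DualColorSpaceNet`, `SimpleRetouchNet`), layer sizes, and the standard BT.601 color conversion are assumptions for illustration; the paper's actual architecture, including the color prediction module and global priors, is not reproduced here.

```python
# Hypothetical sketch of the dual-color-space pipeline described in the abstract.
# Module names, layer sizes, and the BT.601 conversion are illustrative assumptions,
# not the authors' implementation; the CPM and global priors are omitted.
import torch
import torch.nn as nn


def rgb_to_ycbcr(x: torch.Tensor) -> torch.Tensor:
    """Convert an RGB tensor (N, 3, H, W) in [0, 1] to YCbCr (BT.601)."""
    r, g, b = x[:, 0:1], x[:, 1:2], x[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return torch.cat([y, cb, cr], dim=1)


def ycbcr_to_rgb(x: torch.Tensor) -> torch.Tensor:
    """Inverse of rgb_to_ycbcr (BT.601)."""
    y, cb, cr = x[:, 0:1], x[:, 1:2] - 0.5, x[:, 2:3] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return torch.cat([r, g, b], dim=1)


class SimpleRetouchNet(nn.Module):
    """Placeholder for either the transitional or the base network."""

    def __init__(self, channels: int = 3, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        # Residual prediction of the retouching adjustment.
        return x + self.body(x)


class DualColorSpaceNet(nn.Module):
    """RGB input -> CSC to YCbCr -> transitional net -> CSC back to RGB -> base net."""

    def __init__(self):
        super().__init__()
        self.transitional = SimpleRetouchNet()  # operates in YCbCr space
        self.base = SimpleRetouchNet()          # operates in RGB space

    def forward(self, rgb):
        ycbcr = rgb_to_ycbcr(rgb)             # color space converter (CSC)
        ycbcr_out = self.transitional(ycbcr)  # color representation in YCbCr
        rgb_mid = ycbcr_to_rgb(ycbcr_out)     # convert back to RGB
        return self.base(rgb_mid).clamp(0.0, 1.0)


if __name__ == "__main__":
    model = DualColorSpaceNet()
    out = model(torch.rand(1, 3, 64, 64))  # dummy RGB image in [0, 1]
    print(out.shape)                       # torch.Size([1, 3, 64, 64])
```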
Publisher
Springer Science and Business Media LLC