Illumination and Shadows in Head Rotation: Experiments with Denoising Diffusion Models
Published: 2024-08-05
Issue: 15
Volume: 13
Page: 3091
ISSN: 2079-9292
Container-title: Electronics
Language: en
Authors:
Andrea Asperti 1, Gabriele Colasuonno 1, Antonio Guerra 1
Affiliation:
1. Department of Informatics-Science and Engineering (DISI), University of Bologna, 40126 Bologna, Italy
Abstract
Accurately modeling the effects of illumination and shadows during head rotation is critical in computer vision for enhancing image realism and reducing artifacts. This study delves into the latent space of denoising diffusion models to identify compelling trajectories that can express continuous head rotation under varying lighting conditions. A key contribution of our work is the generation of additional labels for the CelebA dataset, categorizing images into three groups based on prevalent illumination direction: left, center, and right. These labels play a crucial role in our approach, enabling more precise manipulations and improved handling of lighting variations. Leveraging a recent embedding technique for Denoising Diffusion Implicit Models (DDIM), our method achieves noteworthy manipulations, encompassing a wide rotation angle of ±30°, while preserving the individual's distinct characteristics even under challenging illumination conditions. Our methodology involves computing trajectories that approximate clouds of latent representations of dataset samples with different yaw rotations through linear regression. Specific trajectories are obtained by analyzing subsets of data that share significant attributes with the source image, including light direction. Notably, our approach does not require any specific training of the generative model for the task of rotation; we merely compute and follow specific trajectories in the latent space of a pre-trained face generation model. This article showcases the potential of our approach and its current limitations through a qualitative discussion of notable examples. This study contributes to the ongoing advancements in representation learning and the semantic investigation of the latent space of generative models.
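The following Python sketch illustrates the kind of latent-space trajectory fitting described in the abstract; it is not the authors' code. It assumes DDIM-inverted latent codes (latents), yaw angles (yaw), and the left/center/right illumination labels (light) are already available as NumPy arrays; all variable and function names are hypothetical.

# Illustrative sketch, not the paper's implementation: fit a linear trajectory
# in the latent space of a pre-trained DDIM that approximates how latent codes
# vary with yaw, using only samples that share the source image's illumination
# label. `latents`, `yaw`, and `light` are assumed to be precomputed arrays.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_yaw_trajectory(latents, yaw, light, source_light):
    """Regress latent codes on yaw angle over the illumination-matched subset.

    latents: (N, D) DDIM-inverted latent codes of dataset samples
    yaw:     (N,)   yaw angles in degrees
    light:   (N,)   prevalent illumination labels, e.g. "left"/"center"/"right"
    Returns the per-degree latent displacement (D,) of the fitted line.
    """
    mask = (light == source_light)
    reg = LinearRegression().fit(yaw[mask].reshape(-1, 1), latents[mask])
    return reg.coef_[:, 0]  # direction of the trajectory in latent space

def rotate_latent(z_source, direction, delta_degrees):
    """Move a source latent along the fitted trajectory by delta_degrees."""
    return z_source + delta_degrees * direction

# Usage: sweep the source latent over the +/-30 degree range reported in the
# abstract, then decode each point with the pre-trained DDIM sampler
# (decoding is model-specific and omitted here).
# direction = fit_yaw_trajectory(latents, yaw, light, source_light="left")
# frames = [rotate_latent(z_source, direction, d) for d in np.linspace(-30, 30, 13)]

The linear-regression fit above mirrors the description in the abstract but simplifies attribute matching to the illumination label alone; the paper also conditions the subset selection on other attributes shared with the source image.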
Funder
Future AI Research (FAIR) project of the National Recovery and Resilience Plan, European Union - NextGenerationEU