A Variational Autoencoder Cascade Generative Adversarial Network for Scalable 3D Object Generation and Reconstruction
Authors:
Yu Min-Su 1, Jung Tae-Won 2, Yun Dai-Yeol 3, Hwang Chi-Gon 3, Park Sea-Young 2, Kwon Soon-Chul 1, Jung Kye-Dong 4
Affiliations:
1. Department of Smart Convergence, Kwangwoon University, Seoul 01897, Republic of Korea
2. Department of Immersive Content Convergence, Kwangwoon University, Seoul 01897, Republic of Korea
3. Institute of Information Technology, Kwangwoon University, Seoul 01897, Republic of Korea
4. Ingenium College of Liberal Arts, Kwangwoon University, Seoul 01897, Republic of Korea
Abstract
Generative Adversarial Networks (GANs) for 3D volume generation and reconstruction are receiving increasing attention across applications such as shape generation, visualization, automated design, real-time simulation, and research. However, challenges such as limited training data, high computational costs, and mode collapse persist. We propose combining a Variational Autoencoder (VAE) with a GAN to capture richer 3D structure, and we introduce a stable and scalable progressive-growth approach for generating and reconstructing intricate voxel-based 3D shapes. The cascade-structured network comprises a generator and a discriminator that start at a small voxel resolution and incrementally add layers; the discriminator is supervised with ground-truth labels at each newly added layer so the network can model a progressively larger voxel space. Our method accelerates convergence and improves the quality of the generated 3D models through stable growth, enabling an accurate representation of fine voxel-level detail. In comparative experiments against existing methods, we demonstrate the effectiveness of our approach in terms of voxel quality, variation, and diversity. The generated models achieve improved accuracy on 3D evaluation metrics and better visual quality, making them valuable across fields including virtual reality, the metaverse, and gaming.
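The abstract does not give implementation details, but the progressive-growth step it describes (starting from a small voxel resolution and fading in newly added layers) can be illustrated with a minimal NumPy sketch. The function names (`upsample_voxels`, `fade_in`) and the linear fade-in blend are assumptions borrowed from the general progressive-growing GAN recipe, not the authors' code:

```python
import numpy as np

def upsample_voxels(v, factor=2):
    """Nearest-neighbour upsampling of a cubic voxel grid (D, H, W)."""
    return v.repeat(factor, axis=0).repeat(factor, axis=1).repeat(factor, axis=2)

def fade_in(coarse, fine, alpha):
    """Blend the upsampled coarse-stage output with the new layer's output.

    alpha ramps from 0 to 1 while the newly added (higher-resolution)
    layer is faded in, keeping training stable during the transition.
    """
    return (1.0 - alpha) * upsample_voxels(coarse) + alpha * fine

rng = np.random.default_rng(0)
coarse = rng.random((4, 4, 4))   # output of the previous 4^3 stage
fine = rng.random((8, 8, 8))     # output of the newly added 8^3 layer
blended = fade_in(coarse, fine, alpha=0.3)
print(blended.shape)             # (8, 8, 8)
```

At `alpha = 0` the network still behaves like the previous, lower-resolution stage; at `alpha = 1` the new layer has fully taken over, after which the next resolution doubling can begin.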