Affiliation:
1. The Chinese University of Hong Kong, Hong Kong
2. The University of Hong Kong, Hong Kong
3. Tel-Aviv University, Israel
Abstract
This paper presents a new text-guided technique for generating 3D shapes. The technique leverages a hybrid 3D shape representation, named EXIM, that combines the strengths of explicit and implicit representations. Specifically, the explicit stage controls the topology of the generated 3D shapes and enables local modifications, whereas the implicit stage refines the shape and paints it with plausible colors. The hybrid approach also decouples shape and color, generating color conditioned on shape to ensure shape-color consistency. Unlike existing state-of-the-art methods, our approach achieves high-fidelity shape generation from natural-language descriptions without time-consuming per-shape optimization and without relying on human-annotated texts during training or test-time optimization. Further, we demonstrate the applicability of our approach to generating indoor scenes with consistent styles using text-induced 3D shapes. Through extensive experiments, we demonstrate the compelling quality of our results and the high coherency of our generated shapes with the input texts, surpassing existing methods by a significant margin. Code and models are released at https://github.com/liuzhengzhe/EXIM.
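The two-stage design described in the abstract may be easier to follow in code. Below is a minimal conceptual sketch in PyTorch of an explicit-then-implicit pipeline of the kind the abstract outlines: all module names, layer sizes, and the text-embedding input are hypothetical illustrations for exposition only, not the authors' actual architecture (which is available at the repository above). The one design point deliberately mirrored here is that the color head consumes the same shape-conditioned features as the occupancy head, so color is predicted conditioned on shape.

# Conceptual sketch of a two-stage explicit/implicit pipeline.
# All names and shapes are hypothetical; see https://github.com/liuzhengzhe/EXIM
# for the real implementation.
import torch
import torch.nn as nn

class ExplicitStage(nn.Module):
    """Stage 1 (hypothetical): map a text embedding to a coarse explicit
    occupancy grid that fixes the topology and supports local edits."""
    def __init__(self, text_dim=512, grid=32):
        super().__init__()
        self.grid = grid
        self.decode = nn.Sequential(
            nn.Linear(text_dim, 1024), nn.ReLU(),
            nn.Linear(1024, grid ** 3))

    def forward(self, text_emb):
        logits = self.decode(text_emb)
        return logits.view(-1, 1, self.grid, self.grid, self.grid)

class ImplicitStage(nn.Module):
    """Stage 2 (hypothetical): condition on the explicit grid, refine
    per-point occupancy, and predict color from the same shape features
    so that color stays consistent with geometry."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encode = nn.Conv3d(1, feat_dim, 3, padding=1)
        self.occ_head = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(), nn.Linear(128, 1))
        self.color_head = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(), nn.Linear(128, 3))

    def forward(self, coarse_grid, points):
        feats = self.encode(coarse_grid)  # (B, C, D, H, W) volume features
        # Trilinearly sample grid features at query points in [-1, 1]^3.
        sampled = nn.functional.grid_sample(
            feats, points.view(points.size(0), -1, 1, 1, 3),
            align_corners=True).squeeze(-1).squeeze(-1).transpose(1, 2)
        x = torch.cat([sampled, points], dim=-1)
        occupancy = self.occ_head(x)          # refined implicit shape
        color = self.color_head(x).sigmoid()  # color conditioned on shape
        return occupancy, color

# Usage with dummy data (a real system would use a pretrained text encoder).
text_emb = torch.randn(1, 512)
points = torch.rand(1, 2048, 3) * 2 - 1  # query points in [-1, 1]^3
coarse = ExplicitStage()(text_emb)
occ, rgb = ImplicitStage()(coarse, points)
print(occ.shape, rgb.shape)  # torch.Size([1, 2048, 1]) torch.Size([1, 2048, 3])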
Funder
Research Grants Council of the Hong Kong Special Administrative Region
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
1 article.