Abstract
We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach in various settings, showing that: (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2D-supervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn.
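To illustrate the idea of exploiting shading alongside silhouettes in a 2D-supervised reconstruction loss, here is a minimal sketch under assumed inputs (per-pixel normals, albedo, and masks from some differentiable mesh renderer); the function names, Lambertian shading model, and weighting below are illustrative assumptions, not the paper's implementation.

import numpy as np

def lambertian_shading(normals, albedo, light_dir, ambient=0.2):
    # Hypothetical per-pixel Lambertian shading: albedo * (ambient + max(0, n . l)).
    # normals: HxWx3 unit surface normals; albedo: HxWx3; light_dir: unit 3-vector.
    diffuse = np.clip(normals @ light_dir, 0.0, None)          # HxW
    return albedo * (ambient + diffuse[..., None])             # HxWx3

def reconstruction_loss(pred_silhouette, pred_normals, pred_albedo,
                        target_image, target_silhouette, light_dir,
                        w_shading=1.0, w_silhouette=1.0):
    # Silhouette term: squared error between predicted and observed masks.
    sil_loss = np.mean((pred_silhouette - target_silhouette) ** 2)
    # Shading term: compare the shaded rendering to the image inside the object mask;
    # this is the extra cue that a silhouette-only loss cannot provide, and it is what
    # allows concave regions (e.g. the inside of a bathtub) to be supervised from 2D images.
    shaded = lambertian_shading(pred_normals, pred_albedo, light_dir)
    mask = target_silhouette[..., None]
    shading_loss = np.sum(mask * (shaded - target_image) ** 2) / (np.sum(mask) + 1e-8)
    return w_silhouette * sil_loss + w_shading * shading_loss

A silhouette-only loss would keep just the first term; the second term is what shading-based supervision adds, and with multiple coloured lights the same comparison is simply made per colour channel.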
Funder
Institute of Science and Technology
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
Cited by
86 articles.