Abstract
We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset of approximately 150,000 pairs of point cloud renderings and camera images. The dataset was recorded with our mobile mapping system, with capture dates spread across one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that this approach can colourize point clouds without the use of any camera images. Additionally, by parameterizing the recording date, we can even predict realistic-looking views for different seasons from identical input point clouds.
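The abstract describes conditioning the generator on the recording date so that one point cloud can be rendered for different seasons. A minimal sketch of how such date conditioning might be wired up is shown below; the cyclic sine/cosine encoding and the channel-stacking scheme are assumptions for illustration (the paper's actual parameterization is not given here), in the style of image-conditional (pix2pix-like) generators that consume a multi-channel input tensor.

```python
import numpy as np

def season_encoding(day_of_year: int) -> np.ndarray:
    """Cyclic encoding of the recording date (hypothetical scheme):
    maps day-of-year onto the unit circle so day 365 is close to day 1."""
    angle = 2.0 * np.pi * day_of_year / 365.0
    return np.array([np.sin(angle), np.cos(angle)], dtype=np.float32)

def build_generator_input(rendered_points: np.ndarray, day_of_year: int) -> np.ndarray:
    """Stack a rendered point-cloud image of shape (H, W, C) with two
    constant date channels, yielding an (H, W, C + 2) conditioning tensor
    that an image-conditional generator could consume."""
    h, w, _ = rendered_points.shape
    date = season_encoding(day_of_year)
    # Broadcast the 2-vector to two constant feature planes of size (H, W).
    date_planes = np.broadcast_to(date, (h, w, 2)).astype(np.float32)
    return np.concatenate([rendered_points, date_planes], axis=-1)

# Example: a 256x256 3-channel splat rendering, recorded mid-summer.
x = build_generator_input(np.zeros((256, 256, 3), np.float32), day_of_year=180)
print(x.shape)  # (256, 256, 5)
```

Because the date channels are constant planes, the same point cloud rendering with a different `day_of_year` yields a different conditioning tensor, which is what lets the generator produce season-dependent views from identical geometry.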
Funder
Deutsche Forschungsgemeinschaft
Publisher
Springer Science and Business Media LLC
Subject
Earth and Planetary Sciences (miscellaneous),Instrumentation,Geography, Planning and Development
Cited by: 2 articles.