Triangle-Mesh-Rasterization-Projection (TMRP): An Algorithm to Project a Point Cloud onto a Consistent, Dense and Accurate 2D Raster Image
Authors:
Christina Junger 1, Benjamin Buch 1, Gunther Notni 1,2
Affiliations:
1. Group for Quality Assurance and Industrial Image Processing, Technische Universität Ilmenau, 98693 Ilmenau, Germany
2. Fraunhofer Institute for Applied Optics and Precision Engineering IOF, 07745 Jena, Germany
Abstract
The projection of a point cloud onto a 2D camera image is relevant to various image analysis and enhancement tasks, e.g., (i) in multimodal image processing for data fusion, (ii) in robotic applications and scene analysis, and (iii) for deep neural networks to generate real datasets with ground truth. We identify the challenges of current single-shot projection methods, such as simple state-of-the-art projection, conventional, polygon-based, and deep-learning-based upsampling methods, and the closed-source SDK functions of low-cost depth cameras. We developed a new way to project point clouds onto a dense, accurate 2D raster image, called Triangle-Mesh-Rasterization-Projection (TMRP). With our method, the only gaps remaining in the 2D image are valid gaps that result from the physical limits of the capturing cameras. Dense accuracy is achieved by using the 2D neighborhood information (rx, ry) of the 3D coordinates simultaneously with the points P(X, Y, V); this enables a fast triangulation-based interpolation whose weights are determined via sub-triangles. Compared to other single-shot methods, our algorithm solves the following challenges: (1) no false gaps or false neighborhoods are generated, (2) the density is XYZ-independent, and (3) ambiguities are eliminated. TMRP is open source, freely available on GitHub, and applicable to almost any sensor or modality. We demonstrate the usefulness of our method in four use cases using the KITTI-2012 dataset and sensors of different modalities. Our goal is to improve recognition tasks and processing optimization in the perception of transparent objects for robotic manufacturing processes.
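The sub-triangle weighting the abstract describes corresponds to classical barycentric rasterization: the weight of each mesh vertex at a raster pixel is the area of the sub-triangle formed by the pixel center and the two opposite vertices, normalized by the full triangle area. The following Python sketch illustrates this idea for a single mesh triangle; the function and variable names are illustrative assumptions, not the authors' released GitHub implementation.

```python
# Minimal sketch of the sub-triangle interpolation idea (hypothetical
# names; not the authors' released TMRP code): rasterize one triangle
# of the point-cloud mesh onto a 2D grid and interpolate the vertex
# values V with barycentric weights derived from sub-triangle areas.
import numpy as np

def signed_area(a, b, c):
    """Twice the signed area of triangle (a, b, c) in 2D."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def rasterize_triangle(raster, p0, p1, p2, v0, v1, v2):
    """Fill pixels inside triangle (p0, p1, p2) with values interpolated
    from v0, v1, v2 via sub-triangle (barycentric) weights."""
    area = signed_area(p0, p1, p2)
    if area == 0.0:  # degenerate triangle, nothing to rasterize
        return
    # Bounding box of the triangle, clipped to the raster extent.
    xs, ys = (p0[0], p1[0], p2[0]), (p0[1], p1[1], p2[1])
    x_min = max(int(np.floor(min(xs))), 0)
    x_max = min(int(np.ceil(max(xs))), raster.shape[1] - 1)
    y_min = max(int(np.floor(min(ys))), 0)
    y_max = min(int(np.ceil(max(ys))), raster.shape[0] - 1)
    for y in range(y_min, y_max + 1):
        for x in range(x_min, x_max + 1):
            p = (x + 0.5, y + 0.5)  # pixel center
            # Weight of each vertex = area of the sub-triangle spanned
            # by the pixel and the two opposite vertices, normalized.
            w0 = signed_area(p1, p2, p) / area
            w1 = signed_area(p2, p0, p) / area
            w2 = signed_area(p0, p1, p) / area
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # pixel lies inside
                raster[y, x] = w0 * v0 + w1 * v1 + w2 * v2

# Usage: project one mesh triangle with per-vertex values (e.g., depth).
img = np.full((8, 8), np.nan)  # NaN marks (valid) gaps
rasterize_triangle(img, (1.0, 1.0), (6.0, 2.0), (3.0, 6.5), 1.0, 2.0, 3.0)
```

Because the three sub-triangle areas sum to the full triangle area for any interior pixel, the weights sum to one, and pixels outside all triangles stay NaN, matching the paper's claim that only valid gaps remain.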
Funder
Carl-Zeiss-Stiftung
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
3 articles.