Depth Prior-Guided 3D Voxel Feature Fusion for 3D Semantic Estimation from Monocular Videos
Published: 2024-07-05
Volume: 12
Issue: 13
Page: 2114
ISSN: 2227-7390
Container-title: Mathematics
Short-container-title: Mathematics
Language: en
Author:
Wen Mingyun 1, Cho Kyungeun 2
Affiliation:
1. Department of Multimedia Engineering, Dongguk University-Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Republic of Korea
2. Division of AI Software Convergence, Dongguk University-Seoul, Seoul 04620, Republic of Korea
Abstract
Existing 3D semantic scene reconstruction methods use the same set of features extracted from deep learning networks for both 3D semantic estimation and geometry reconstruction, ignoring the differing requirements of the semantic segmentation and geometry construction tasks. Additionally, current methods allocate 2D image features to all voxels along camera rays during the back-projection process, without accounting for empty or occluded voxels. To address these issues, we propose separating the features for 3D semantic estimation from those for 3D mesh reconstruction. We use a pretrained vision transformer network for image feature extraction, and depth priors estimated by a pretrained multi-view stereo network guide the allocation of image features to 3D voxels during the back-projection process. The back-projected image features are aggregated within each 3D voxel via averaging, creating coherent voxel features. The resulting 3D feature volume, composed of unified voxel feature vectors, is fed into a 3D CNN with a semantic classification head to produce a 3D semantic volume. This volume can be combined with existing 3D mesh reconstruction networks to produce a 3D semantic mesh. Experimental results on real-world datasets demonstrate that the proposed method significantly increases 3D semantic estimation accuracy.
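The back-projection step described in the abstract can be sketched as follows: voxel centres are projected into the image, and a feature is assigned only to voxels whose camera-space depth lies close to the depth prior at the corresponding pixel, so that empty or occluded voxels along the ray receive nothing; features landing in the same voxel (e.g. from multiple views) are averaged. This is a minimal NumPy illustration, not the paper's implementation — the function name, tolerance parameter `depth_tol`, and grid conventions are assumptions.

```python
import numpy as np

def backproject_with_depth_prior(feat_2d, depth_prior, K, cam_to_world,
                                 voxel_origin, voxel_size, grid_shape,
                                 depth_tol=0.1):
    """Scatter 2D features (C, H, W) into a voxel volume (C, *grid_shape),
    keeping only voxels whose depth is within depth_tol of the depth prior.
    Illustrative sketch; names and conventions are assumed, not the paper's."""
    C, H, W = feat_2d.shape
    vol_sum = np.zeros((C, *grid_shape), dtype=np.float32)
    vol_cnt = np.zeros(grid_shape, dtype=np.float32)

    # Voxel centres in world coordinates.
    ii, jj, kk = np.meshgrid(*[np.arange(n) for n in grid_shape], indexing="ij")
    centres = voxel_origin + (np.stack([ii, jj, kk], axis=-1) + 0.5) * voxel_size

    # Transform centres from world to camera coordinates.
    world_to_cam = np.linalg.inv(cam_to_world)
    pts = centres.reshape(-1, 3) @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts[:, 2]
    valid = z > 1e-6  # in front of the camera

    # Project to pixel coordinates with the intrinsics K.
    u = (K[0, 0] * pts[:, 0] / np.maximum(z, 1e-6) + K[0, 2]).round().astype(int)
    v = (K[1, 1] * pts[:, 1] / np.maximum(z, 1e-6) + K[1, 2]).round().astype(int)
    valid &= (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Depth-prior gating: keep only voxels near the predicted surface,
    # skipping empty and occluded space along each camera ray.
    idx = np.flatnonzero(valid)
    near = np.abs(z[idx] - depth_prior[v[idx], u[idx]]) < depth_tol
    idx = idx[near]

    # Accumulate features and counts, then average per voxel.
    flat = np.unravel_index(idx, grid_shape)
    vol_sum[:, flat[0], flat[1], flat[2]] += feat_2d[:, v[idx], u[idx]]
    vol_cnt[flat] += 1.0
    return vol_sum / np.maximum(vol_cnt, 1.0)
```

Calling this once per frame and accumulating sums and counts across frames would give the multi-view average described in the abstract; the averaged volume is then the input to the 3D CNN with the semantic classification head.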
Funder:
National Research Foundation of Korea
Institute of Information & Communications Technology Planning & Evaluation