Abstract
Recent studies have shown that deep learning achieves excellent performance in reconstructing 3D scenes from multi-view images or videos. However, these reconstructions do not provide object identities, which are necessary for a scene to be functional in virtual reality or other interactive applications. A scene reconstructed as a single mesh treats its contents as one object rather than as individual entities that can be interacted with or manipulated. Reconstructing an object-aware 3D scene from a single 2D image is challenging because the projection from 3D to 2D discards a dimension and cannot be inverted. To alleviate the effects of this dimension reduction, we propose a module that generates depth features to aid the 3D pose estimation of objects. In addition, we develop a novel approach to mesh reconstruction that combines two decoders estimating 3D shapes with different shape representations. By leveraging the principles of multitask learning, our approach generates more complete meshes than methods relying solely on implicit representation-based mesh reconstruction networks (e.g., local deep implicit functions) and more accurate shapes than previous approaches to mesh reconstruction from single images (e.g., topology modification networks). The proposed method was evaluated on real-world datasets, and the results show that it effectively improves object-aware 3D scene reconstruction over existing methods.
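The multitask combination of two shape decoders described above can be illustrated with a minimal training-objective sketch. This is an assumption-laden illustration, not the paper's actual formulation: the function name, the choice of an L1 signed-distance loss for the implicit branch, a vertex-distance loss for the explicit branch, and the weighting scheme are all hypothetical.

```python
import numpy as np

def multitask_shape_loss(pred_sdf, gt_sdf, pred_verts, gt_verts,
                         w_implicit=1.0, w_explicit=1.0):
    """Hypothetical joint objective for a dual-decoder mesh network:
    one decoder predicts an implicit shape representation (here, signed
    distances at sampled points), the other an explicit one (mesh
    vertices). Multitask learning sums the two weighted loss terms."""
    # Implicit branch: mean absolute error on sampled signed distances.
    loss_implicit = np.mean(np.abs(pred_sdf - gt_sdf))
    # Explicit branch: mean Euclidean distance between corresponding vertices.
    loss_explicit = np.mean(np.linalg.norm(pred_verts - gt_verts, axis=-1))
    return w_implicit * loss_implicit + w_explicit * loss_explicit
```

In practice the two branches would share an image encoder, so gradients from both losses shape a common feature space; the weights `w_implicit` and `w_explicit` trade off shape completeness against surface accuracy.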
Funder
National Research Foundation of Korea
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)
Cited by
3 articles.