Virtual Experience Toolkit: An End-to-End Automated 3D Scene Virtualization Framework Implementing Computer Vision Techniques
Authors:
Pau Mora 1, Clara Garcia 1, Eugenio Ivorra 1, Mario Ortega 1, Mariano L. Alcañiz 1
Affiliation:
1. Research in Human-Centred Technology University Research Institute, Universitat Politècnica de València, 46022 Valencia, Spain
Abstract
Virtualization plays a critical role in enriching the user experience in Virtual Reality (VR), offering heightened realism, deeper immersion, safer navigation, and new levels of interaction and personalization, particularly in indoor environments. Traditionally, the creation of virtual content has fallen into one of two broad categories: manual methods crafted by graphic designers, which are labor-intensive and can lack precision, and traditional Computer Vision (CV) and Deep Learning (DL) frameworks, which frequently yield semi-automatic, complex solutions that lack a unified framework for both 3D reconstruction and scene understanding, often miss a fully interactive representation of the objects, and neglect their appearance. To address these challenges and limitations, we introduce the Virtual Experience Toolkit (VET), an automated, user-friendly framework that applies DL and advanced CV techniques to virtualize real-world indoor scenarios efficiently and accurately. Its key features are (i) ScanNotate, a CAD retrieval and alignment tool that improves the precision and efficiency of its precursor through upgrades such as a preprocessing step that makes it fully automatic and a preselection of a reduced list of CAD models that speeds up the process, and (ii) a user-friendly, fully automatic Unity3D application that guides users through the whole pipeline and concludes with a fully interactive and customizable 3D scene. The efficacy of VET is demonstrated on a diversified dataset of virtualized 3D indoor scenarios that supplements the ScanNet dataset.
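The abstract does not include code, so the following is only a minimal, hypothetical Python sketch of what the described CAD preselection step might look like: filtering a CAD database down to the detected object's semantic class and ranking the survivors by embedding similarity before the expensive alignment stage. All names (preselect_cad_models, scan_embedding, cad_embeddings, cad_labels, top_k) are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def preselect_cad_models(scan_embedding, cad_embeddings, cad_labels,
                         predicted_class, top_k=10):
    """Return indices of the top_k CAD candidates for one scanned object.

    Hypothetical sketch: restrict the CAD database to the object's
    predicted semantic class, then rank the remaining models by cosine
    similarity between precomputed shape embeddings.
    """
    # Keep only CAD models whose label matches the detected object class.
    candidates = np.flatnonzero(cad_labels == predicted_class)
    if candidates.size == 0:
        # Fall back to the full database if the class is unseen.
        candidates = np.arange(len(cad_labels))

    # Cosine similarity between the scan and each candidate embedding.
    cand = cad_embeddings[candidates]
    sims = cand @ scan_embedding / (
        np.linalg.norm(cand, axis=1) * np.linalg.norm(scan_embedding) + 1e-8
    )

    # Highest-similarity models first; only this reduced shortlist is
    # passed on to the slower retrieval-and-alignment stage.
    order = np.argsort(-sims)[:top_k]
    return candidates[order]
```

Under these assumptions, shrinking the candidate set from the full database to a class-filtered shortlist is what would make the per-object alignment cost tractable, which matches the speed-up role the abstract attributes to the preselection upgrade.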
Funder:
European Community’s Horizon 2020