Virtual Experience Toolkit: An End-to-End Automated 3D Scene Virtualization Framework Implementing Computer Vision Techniques

Authors:

Mora Pau 1, Garcia Clara 1, Ivorra Eugenio 1, Ortega Mario 1, Alcañiz Mariano L. 1

Affiliation:

1. Research in Human-Centred Technology University Research Institute, Universitat Politècnica de València, 46022 Valencia, Spain

Abstract

Virtualization plays a critical role in enriching the user experience in Virtual Reality (VR) by offering heightened realism, increased immersion, safer navigation, and new levels of interaction and personalization, particularly in indoor environments. Traditionally, the creation of virtual content has fallen into one of two broad categories: manual methods crafted by graphic designers, which are labor-intensive and sometimes lack precision, and traditional Computer Vision (CV) and Deep Learning (DL) frameworks, which frequently result in semi-automatic and complex solutions that lack a unified framework for both 3D reconstruction and scene understanding, often omit a fully interactive representation of the objects, and neglect their appearance. To address these challenges and limitations, we introduce the Virtual Experience Toolkit (VET), an automated and user-friendly framework that uses DL and advanced CV techniques to virtualize real-world indoor scenarios efficiently and accurately. The key features of VET are (i) the use of ScanNotate, a retrieval and alignment tool that improves the precision and efficiency of its precursor through upgrades such as a preprocessing step that makes it fully automatic and a preselection of a reduced list of CAD models that speeds up the process, and (ii) its implementation as a user-friendly, fully automatic Unity3D application that guides users through the whole pipeline and concludes in a fully interactive and customizable 3D scene. The efficacy of VET is demonstrated on a diversified dataset of virtualized 3D indoor scenarios, supplementing the ScanNet dataset.
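To make the stages described in the abstract more concrete, the following Python sketch outlines one plausible organization of the automated pipeline: 3D reconstruction, DL-based scene understanding, per-class CAD preselection, and ScanNotate-style retrieval and alignment, producing an object list that a Unity3D front end could turn into an interactive scene. Every name in the sketch (reconstruct_mesh, segment_instances, preselect_cads, retrieve_and_align, virtualize_scene, Instance, AlignedObject) is a hypothetical stand-in for illustration only; it is not the actual VET or ScanNotate API.

```python
# Minimal, hypothetical sketch of the VET-style pipeline stages named in the abstract.
# All function and type names are illustrative assumptions, not the toolkit's real API.
from dataclasses import dataclass
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]

@dataclass
class Instance:
    category: str        # semantic class predicted for a scanned object
    points: List[Point]  # segmented point cluster belonging to that object

@dataclass
class AlignedObject:
    cad_id: str              # identifier of the retrieved CAD model
    pose: Tuple[float, ...]  # flattened 4x4 object-to-scene transform
    category: str

def reconstruct_mesh(scan_path: str) -> List[Point]:
    """Placeholder: a real implementation would fuse the RGB-D scan into geometry."""
    return []

def segment_instances(points: List[Point]) -> List[Instance]:
    """Placeholder: DL-based instance segmentation for scene understanding."""
    return []

def preselect_cads(category: str, library: Dict[str, List[str]]) -> List[str]:
    """Reduced per-class CAD shortlist, mirroring the preselection upgrade."""
    return library.get(category, [])

def retrieve_and_align(inst: Instance, candidates: List[str]) -> Tuple[str, Tuple[float, ...]]:
    """Placeholder: retrieval of the best-matching CAD model plus pose refinement."""
    identity = (1.0, 0.0, 0.0, 0.0,
                0.0, 1.0, 0.0, 0.0,
                0.0, 0.0, 1.0, 0.0,
                0.0, 0.0, 0.0, 1.0)
    return (candidates[0] if candidates else "unmatched", identity)

def virtualize_scene(scan_path: str, cad_library: Dict[str, List[str]]) -> List[AlignedObject]:
    """Chains the stages; the resulting object list is the kind of output a
    Unity3D application could turn into an interactive, customizable 3D scene."""
    points = reconstruct_mesh(scan_path)
    objects = []
    for inst in segment_instances(points):
        candidates = preselect_cads(inst.category, cad_library)
        cad_id, pose = retrieve_and_align(inst, candidates)
        objects.append(AlignedObject(cad_id, pose, inst.category))
    return objects
```

Keeping CAD preselection as its own step in the sketch reflects the abstract's claim that narrowing the candidate list per object class is what speeds up the retrieval and alignment stage.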

Funder

European Community’s Horizon 2020

Publisher

MDPI AG

