Full body video-based self-avatars for mixed reality: from E2E system to user study

Authors:

Gonzalez Morin, Diego; Gonzalez-Sosa, Ester; Perez, Pablo; Villegas, Alvaro

Abstract

In this work, we explore the creation of self-avatars through video pass-through in mixed reality (MR) applications. We present our end-to-end (E2E) system, which includes a custom MR video pass-through implementation on a commercial head-mounted display (HMD), a deep learning-based real-time egocentric body segmentation algorithm, and an optimized offloading architecture that connects the segmentation server with the HMD. To validate this technology, we designed an immersive VR experience in which the user has to walk along a narrow tile path over an active volcano crater. The study was performed under three body-representation conditions: virtual hands, video pass-through with color-based full-body segmentation, and video pass-through with deep learning full-body segmentation. The immersive experience was completed by 30 women and 28 men. To the best of our knowledge, this is the first user study focused on evaluating video-based self-avatars for representing the user in an MR scene. Results showed no significant differences in presence between the body representations, with moderate improvements in some embodiment components when moving from virtual hands to the full-body representations. Visual quality results favored the deep learning algorithm in terms of whole-body perception and overall segmentation quality. We also discuss the use of video-based self-avatars and reflect on the evaluation methodology. The proposed E2E solution is at the boundary of the state of the art, so there is still room for improvement before it reaches maturity. Nevertheless, it serves as a crucial starting point for MR applications in which users can feel immersed and interact with their own bodies.
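The abstract describes the offloading architecture only at a high level: the HMD streams egocentric camera frames to a remote segmentation server, receives a per-pixel body mask back, and composites the real body over the virtual scene for video pass-through. The sketch below illustrates that client-side loop in a minimal, desktop-only form; the endpoint name, message framing, JPEG compression, and raw-mask encoding are all assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of a segmentation-offloading client loop (assumed design, not
# the paper's actual code): send an egocentric frame to a remote server, get a
# single-channel body mask back, and composite the real-body pixels.
import asyncio

import cv2
import numpy as np
import websockets

SERVER_URI = "ws://segmentation-server.local:8765"  # hypothetical endpoint


async def offload_frames(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)  # desktop webcam stands in for the HMD camera
    async with websockets.connect(SERVER_URI) as ws:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # Compress the frame before sending to keep uplink latency low.
            _, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
            await ws.send(jpeg.tobytes())
            # Assumed reply format: raw uint8 mask (0 = background, 255 = body)
            # at the same resolution as the input frame.
            mask_bytes = await ws.recv()
            mask = np.frombuffer(mask_bytes, dtype=np.uint8).reshape(frame.shape[:2])
            # Composite: keep real-body pixels from the pass-through video; the
            # rest would be filled by the rendered virtual scene (zeroed here).
            composited = cv2.bitwise_and(frame, frame, mask=mask)
            cv2.imshow("self-avatar pass-through (sketch)", composited)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    asyncio.run(offload_frames())
```

In the system described by the paper, the composition runs on the HMD and the segmentation network runs on a GPU server over a low-latency link; this sketch only mimics that split on a single machine to make the request/response pattern concrete.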

Funder

Marie Skłodowska-Curie ETN TeamUp5G

Publisher

Springer Science and Business Media LLC

Subject

Computer Graphics and Computer-Aided Design, Human-Computer Interaction, Software


Cited by 3 articles.

1. Exploring the Influence of Virtual Avatar Heads in Mixed Reality on Social Presence, Performance and User Experience in Collaborative Tasks. IEEE Transactions on Visualization and Computer Graphics, 2024-05.

2. Immersive Behavioral Therapy for Phobia Treatment in Individuals with Intellectual Disabilities. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2024-03-16.

3. An eXtended Reality Offloading IP Traffic Dataset and Models. IEEE Transactions on Mobile Computing, 2023.
