Authors:
Sasikumar Prasanth, Chittajallu Soumith, Raj Navindd, Bai Huidong, Billinghurst Mark
Abstract
Conventional training and remote collaboration systems allow users to see each other's faces, heightening the sense of presence while sharing content such as videos or slideshows. However, these methods lack depth information and a free 3D perspective of the training content. This paper investigates the impact of volumetric playback in a Mixed Reality (MR) spatial training system. We describe the MR system in a mechanical assembly scenario that incorporates various instruction delivery cues. Building upon previous research, four spatial instruction cues were explored: "Annotation", "Hand gestures", "Avatar", and "Volumetric playback". Through two user studies that simulated a real-world mechanical assembly task, we found that the volumetric visual cue enhanced spatial perception in the tested MR training tasks, yielding increased co-presence and system usability while reducing mental workload and frustration. We also found that the given tasks required less effort and mental load when eye gaze was incorporated. Eye gaze on its own was not perceived to be very useful, but it helped to complement the hand gesture cues. Finally, we discuss limitations, future work, and potential applications of our system.
Cited by: 8 articles.