Virtual, Augmented, and Mixed Reality for Human-Robot Interaction: A Survey and Virtual Design Element Taxonomy

Authors:

Michael Walker (1), Thao Phung (2), Tathagata Chakraborti (3), Tom Williams (2), Daniel Szafir (1)

Affiliation:

1. University of North Carolina at Chapel Hill, United States

2. Colorado School of Mines, United States

3. IBM Research AI, United States

Abstract

Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) has been gaining considerable attention in HRI research in recent years. However, the HRI community lacks a shared terminology and framework for characterizing aspects of mixed reality interfaces, which poses serious problems for future research: without a common set of terms and concepts, the diverse array of work being done within the field cannot be precisely described and organized. In this article, we present a novel taxonomic framework for different types of VAM-HRI interfaces, composed of four main categories of virtual design elements (VDEs). We present and justify our taxonomy, explain how its elements have developed over the past 30 years, and discuss the directions VAM-HRI is headed in the coming decade.
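
As an illustration of how such a taxonomy might be applied in practice, the following minimal Python sketch tags surveyed interfaces with the VDE categories they employ and filters a corpus by category. Note that the four category names used here are hypothetical placeholders (the abstract does not enumerate the actual categories, which are defined in the full article), and SurveyedInterface and by_category are illustrative helpers, not part of any published artifact.

from dataclasses import dataclass
from enum import Enum, auto


class VDECategory(Enum):
    # Hypothetical placeholder names: the abstract states that the taxonomy
    # has four main categories of virtual design elements but does not name
    # them here; see the full article for the actual labels.
    PLACEHOLDER_A = auto()
    PLACEHOLDER_B = auto()
    PLACEHOLDER_C = auto()
    PLACEHOLDER_D = auto()


@dataclass(frozen=True)
class SurveyedInterface:
    """One surveyed VAM-HRI interface, tagged with the VDEs it uses."""
    citation: str
    vdes: frozenset  # set of VDECategory members


def by_category(entries, category):
    """Return the surveyed interfaces that employ a given VDE category."""
    return [e for e in entries if category in e.vdes]


# Example: build a tiny corpus and query it by VDE category.
corpus = [
    SurveyedInterface("Example interface A",
                      frozenset({VDECategory.PLACEHOLDER_A})),
    SurveyedInterface("Example interface B",
                      frozenset({VDECategory.PLACEHOLDER_A,
                                 VDECategory.PLACEHOLDER_C})),
]
print([e.citation for e in by_category(corpus, VDECategory.PLACEHOLDER_A)])

A machine-readable tagging scheme of this kind is one way a shared taxonomy supports the precise description and organization of work that the abstract calls for, for example when conducting meta-analyses across the surveyed literature.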

Funder

NSF

Publisher

Association for Computing Machinery (ACM)

Subject

Artificial Intelligence, Human-Computer Interaction

Cited by 3 articles:

1. Human Preferred Augmented Reality Visual Cues for Remote Robot Manipulation Assistance: From Direct to Supervisory Control. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.

2. SEAN-VR. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 2023.

3. Now Look Here! ⇓ Mixed Reality Improves Robot Communication Without Cognitive Overload. In Lecture Notes in Computer Science, 2023.
