Affiliation:
1. LMU Munich, Frauenlobstr. 7a, 80337 Munich, Germany
Abstract
In human–robot interaction, transparency is essential to ensure that humans understand and trust robots. Understanding is vital from an ethical perspective and benefits interaction, e.g., by fostering appropriate trust. While there is research on explanations and their content, the methods used to convey these explanations remain underexplored, and it is unclear which approaches are actually used to foster understanding. To this end, we contribute a systematic literature review exploring how robot transparency is fostered in papers published in the ACM Digital Library and IEEE Xplore. We found that researchers predominantly rely on monomodal visual or verbal explanations to foster understanding. Commonly, these explanations are external rather than integrated into the robot design. This paper provides an overview of how transparency is communicated in human–robot interaction research and derives a classification with concrete recommendations for communicating transparency. Our results establish a solid base for consistent, transparent human–robot interaction designs.
Subject
Computer Networks and Communications, Computer Science Applications, Human-Computer Interaction, Neuroscience (miscellaneous)
Cited by
6 articles.