Affiliation:
Linköping University, Sweden
Abstract
The explainability of robotic systems depends on people’s ability to reliably attribute perceptual beliefs to robots, i.e., what robots know (or believe) about objects and events in the world based on their perception. However, the perceptual systems of robots are not necessarily well understood by the majority of people interacting with them. In this article, we explain why this is a significant, difficult, and unique problem in social robotics. The inability to judge what a robot knows (and does not know) about the physical environment it shares with people gives rise to a host of communicative and interactive issues, including difficulties in communicating about objects or adapting to events in the environment. The challenge faced by social robotics researchers or designers who want to facilitate appropriate attributions of perceptual beliefs to robots is to shape human–robot interactions so that people understand what robots know about objects and events in the environment. To meet this challenge, we argue, it is necessary to advance our knowledge of when and why people form incorrect or inadequate mental models of robots’ perceptual and cognitive mechanisms. We outline a general approach to studying this empirically and discuss potential solutions to the problem.
Funder
Excellence Center at Linköping-Lund in Information Technology (ELLIIT)
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Human-Computer Interaction
Cited by
22 articles.