Affiliation:
1. Institute of Management Science, TU Wien, Austria
Abstract
This paper addresses the question of whether robots, when explaining their behavior, should adhere to the same social norms that apply to human-human interaction. Specifically, it investigates how the ascription of intentions to robots' behavior and robots' explainability intertwine in the context of social interactions. We argue that robots should be able to contextually guide users toward adopting the most appropriate interpretative framework by providing explanations that refer to intentions, reasons, and objectives, as well as to different kinds of causes (e.g., mechanical, accidental). We support our argument with use cases grounded in real-world applications.