Affiliations:
1. Tufts University
2. University of Central Florida
3. Stanford University
Abstract
As social robots increase in capability and become ubiquitous parts of the environment, conflicts between humans and technological agents will become more frequent. Conflict is not necessarily bad: it can provide opportunities for sharing information, calibrating trust, and establishing common situation awareness, provided that the conflict plays out in a reasonable manner. Social robots should therefore be designed to engage in conflict with humans gracefully and artfully, using conflict as a mode of communication while limiting the adverse effects of confusion, frustration, and deadlock.