Affiliation:
1. Lund University and University of Technology Sydney, Lund, Sweden
Abstract
Most semantic models employed in human-robot interaction concern how a robot can understand commands, but the aim of this article is to present a framework that allows dialogic interaction. The key idea is to use events as the fundamental structures for the semantic representations of a robot. Events are modeled in terms of conceptual spaces and mappings between spaces. It is shown how the semantics of major word classes can be described with the aid of conceptual spaces in a way that is amenable to computer implementation. An event is represented by two vectors: one force vector representing an action and one result vector representing the effect of the action. The two-vector model is then extended with thematic roles, so that an event is built up from an agent, an action, a patient, and a result. It is shown how the components of an event can be put together into semantic structures that represent the meanings of sentences. It is argued that a semantic framework based on events can generate a general representational framework for human-robot communication. An implementation of the framework involving communication with an iCub robot is described.
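As an illustration of the two-vector event model summarized in the abstract, the following is a minimal Python sketch, not taken from the paper itself, of how an event could be encoded as a force vector and a result vector extended with the thematic roles of agent and patient; the class name, field names, and vector dimensions are all hypothetical choices for this example.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    """Two-vector event model: a force vector (the action) and a
    result vector (the effect on the patient), extended with the
    thematic roles agent and patient."""
    agent: str         # entity exerting the force
    patient: str       # entity the force acts on
    force: np.ndarray  # action, e.g. direction and magnitude of a push
    result: np.ndarray # effect, e.g. resulting displacement of the patient

# Hypothetical example: the agent "robot" pushes the "box",
# and the box moves roughly in the direction of the push.
push = Event(
    agent="robot",
    patient="box",
    force=np.array([1.0, 0.0, 0.0]),   # push along the x-axis
    result=np.array([0.8, 0.0, 0.0]),  # box displaced 0.8 units along x
)
print(push)
```

A sentence such as "the robot pushes the box" would then correspond to one such event structure, with the verb constraining the force vector and the result vector describing the change of state of the patient.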
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Human-Computer Interaction
Cited by
1 article.