Affiliation:
1. Toyohashi University of Technology, Toyohashi, Aichi 441-8580, Japan
2. Shizuoka University, Hamamatsu, Shizuoka 432-8011, Japan
Abstract
In this paper, we describe a semantic interpreter and a cooperative response generator for a multimodal dialogue system that accepts speech input and touch-screen input and produces speech output and graphical output. The system understands spontaneous speech, which exhibits many ambiguous phenomena such as interjections, ellipses, inversions, repairs, and unknown words, and responds to the user's utterances. However, some utterances fail to be analyzed, owing to misrecognition by the speech recognizer, incompleteness in the semantic interpreter, and gaps in the response generator's database. We therefore improved the semantic interpreter to make it more robust. If a user's query does not provide enough conditions or information for the system to answer, the dialogue manager should ask the user for the missing conditions or to select among candidates. Furthermore, if the system cannot retrieve any information related to the user's question, the generator should propose an alternative plan. Based on these considerations, we developed a cooperative response generator for the dialogue system. We report evaluation results for the semantic interpreter and the cooperative response generator.
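The abstract describes a two-way cooperative policy: ask the user for missing conditions, and propose an alternative when retrieval fails. The following is a minimal illustrative sketch of that policy, not the authors' implementation; all names (Query, retrieve, suggest_alternative, REQUIRED_SLOTS) are hypothetical placeholders.

```python
from dataclasses import dataclass, field

# Assumed required slots for an information-seeking query (hypothetical).
REQUIRED_SLOTS = {"date", "destination"}


@dataclass
class Query:
    slots: dict = field(default_factory=dict)  # slot name -> value from the semantic interpreter


def retrieve(query: Query) -> list[str]:
    """Placeholder database lookup; returns matching records (empty in this sketch)."""
    return []


def suggest_alternative(query: Query) -> str:
    """Placeholder for proposing an alternative plan when retrieval finds nothing."""
    return "No exact match was found; would a nearby date or destination work?"


def respond(query: Query) -> str:
    missing = REQUIRED_SLOTS - query.slots.keys()
    if missing:
        # Cooperative clarification: request the conditions needed to answer.
        return f"Could you tell me the {', '.join(sorted(missing))}?"
    results = retrieve(query)
    if not results:
        # Cooperative fallback: propose an alternative plan instead of a bare failure.
        return suggest_alternative(query)
    return f"I found {len(results)} options: " + "; ".join(results)


if __name__ == "__main__":
    # Missing "date", so the system asks a clarifying question.
    print(respond(Query(slots={"destination": "Toyohashi"})))
```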
Publisher
World Scientific Pub Co Pte Lt
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
Cited by
1 article.