Abstract
To produce viable, human-like conversational responses, an artificial entity such as an embodied conversational agent must express correlated speech (verbal) and gesture (non-verbal) responses in spoken social interaction. Most existing frameworks focus on intent planning and behavior planning; realization, however, is left to a limited set of static 3D representations of conversational expressions. Beyond functional and semantic synchrony between verbal and non-verbal signals, the believability of the displayed expression is also shaped by the physical realization of non-verbal expressions. A major challenge for most conversational systems capable of reproducing gestures is the diversity of expressiveness. In this paper, we propose a method for capturing gestures automatically from videos and transforming them into 3D representations stored in the conversational agent’s repository of motor skills. The main advantage of the proposed method is that it ensures the naturalness of the embodied conversational agent’s gestures, resulting in higher-quality human–computer interaction. The method is based on a Kanade–Lucas–Tomasi tracker, a Savitzky–Golay filter, a Denavit–Hartenberg-based kinematic model, and the EVA framework. Furthermore, we designed an objective evaluation method based on cosine similarity rather than a subjective evaluation of the synthesized movement. The proposed method achieved 96% similarity.
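The objective evaluation described above compares captured and synthesized motion using cosine similarity. A minimal sketch of that comparison is given below; the trajectory values and the idea of comparing per-frame joint angles are illustrative assumptions, not taken from the paper itself:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two motion trajectories, flattened to vectors."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical joint-angle trajectories (radians per frame): one captured
# from video, one reproduced by the agent's kinematic model.
captured    = [0.10, 0.25, 0.40, 0.55, 0.60]
synthesized = [0.12, 0.24, 0.41, 0.53, 0.61]

score = cosine_similarity(captured, synthesized)  # close to 1.0 for similar motion
```

A score near 1.0 indicates that the synthesized movement closely follows the direction of the captured trajectory, which is what makes the measure usable as an objective stand-in for subjective naturalness ratings.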
Funder
Slovenian Research Agency, Young Researcher Funding
Slovenian Research Agency
Subject
Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry
Cited by 3 articles.