Affiliation:
1. Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, 42122 Reggio Emilia, Italy
2. Department of Engineering, King’s College London, London WC2R 2LS, UK
Abstract
Extended reality (XR) systems are about to be integrated into our daily lives and will provide support in a variety of fields such as education and coaching. Enhancing the user experience demands agents that are capable of displaying realistic affective and social behaviors within these systems and, as a prerequisite, of understanding their interaction partner and responding appropriately. Our review of recent literature on co-speech gesture generation shows that researchers have developed complex models capable of generating gestures with a high level of human-likeness and speaker appropriateness. Nevertheless, this holds only in settings where the agent has an active status (i.e., the agent acts as the speaker) or is delivering a monologue in a non-interactive setting. As illustrated in multiple works and competitions such as the GENEA Challenge, these models remain inadequate at generating interlocutor-aware gestures, i.e., gestures that take the conversation partner's behavior into account. Moreover, in settings where the agent is the listener, the generated gestures lack the naturalness we expect from a face-to-face conversation. To overcome these issues, we have designed a pipeline, called TAG2G, composed of a diffusion model, which has proven to be a stable and powerful tool for gesture generation, and a vector-quantized variational auto-encoder (VQVAE), which is widely employed to produce meaningful gesture embeddings. Shifting from a monadic to a dyadic multimodal input setting (i.e., taking into account the text, audio, and previous gestures of both participants in a conversation) allows us to explore and infer the complex interaction mechanisms that underlie a balanced two-sided conversation. Our results show that a multi-agent conversational input setup improves the appropriateness of the generated gestures with respect to the conversational counterpart, whereas, when the agent is speaking, a monadic approach performs better in terms of the appropriateness of the generated gestures in relation to the speech.
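To make the dyadic conditioning idea concrete, the sketch below illustrates one possible way to fuse text, audio, and previous-gesture features from both participants into a conditioning vector for a diffusion denoiser operating in a VQVAE latent space. This is a minimal illustrative sketch, not the authors' TAG2G implementation; all module names, feature dimensions, and shapes are assumptions introduced here for illustration.

```python
# Minimal, illustrative sketch (not the authors' code): dyadic conditioning for a
# diffusion-based gesture generator operating in a VQVAE latent space.
# All module names, feature dimensions, and shapes below are assumptions.
import torch
import torch.nn as nn


class DyadicConditioner(nn.Module):
    """Fuses text, audio, and previous-gesture features of BOTH participants
    into a single conditioning vector per frame."""

    def __init__(self, text_dim=300, audio_dim=128, gesture_dim=64, cond_dim=256):
        super().__init__()
        per_agent = text_dim + audio_dim + gesture_dim
        self.fuse = nn.Sequential(
            nn.Linear(2 * per_agent, cond_dim),  # agent + interlocutor streams
            nn.GELU(),
            nn.Linear(cond_dim, cond_dim),
        )

    def forward(self, agent_feats, interlocutor_feats):
        # Each input: (batch, frames, text_dim + audio_dim + gesture_dim).
        return self.fuse(torch.cat([agent_feats, interlocutor_feats], dim=-1))


class LatentDenoiser(nn.Module):
    """Toy denoiser: predicts the noise added to VQVAE gesture latents,
    conditioned on the fused dyadic context and the diffusion timestep."""

    def __init__(self, latent_dim=64, cond_dim=256, hidden=512, num_steps=1000):
        super().__init__()
        self.step_emb = nn.Embedding(num_steps, cond_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, noisy_latents, cond, t):
        # noisy_latents: (batch, frames, latent_dim); cond: (batch, frames, cond_dim).
        cond = cond + self.step_emb(t)[:, None, :]  # broadcast timestep over frames
        return self.net(torch.cat([noisy_latents, cond], dim=-1))


if __name__ == "__main__":
    B, F = 2, 30  # batch size and number of frames (assumed)
    agent = torch.randn(B, F, 300 + 128 + 64)
    partner = torch.randn(B, F, 300 + 128 + 64)
    latents = torch.randn(B, F, 64)   # stand-in for VQVAE gesture latents
    t = torch.randint(0, 1000, (B,))  # diffusion timesteps

    cond = DyadicConditioner()(agent, partner)
    eps_pred = LatentDenoiser()(latents, cond, t)
    print(eps_pred.shape)  # torch.Size([2, 30, 64])
```

Reverting to a monadic setup in this sketch would simply mean dropping the interlocutor stream from the conditioner, which mirrors the comparison discussed in the abstract.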
Funder
European Union’s Horizon Europe Research and Innovation Program
References (41 articles)
1. Nyatsanga. A Comprehensive Review of Data-Driven Co-Speech Gesture Generation. Computer Graphics Forum, 2023.
2. Zeyer. The role of empathy for learning in complex Science|Environment|Health contexts. Int. J. Sci. Educ., 2019.
3. Bambaeeroo. The impact of the teachers' non-verbal communication on success in teaching. J. Adv. Med. Educ. Prof., 2017.
4. Makransky. The cognitive affective model of immersive learning (CAMIL): A theoretical research-based model of learning in immersive virtual reality. Educ. Psychol. Rev., 2021.
5. Kucherenko, T., Nagy, R., Yoon, Y., Woo, J., Nikolov, T., Tsakov, M., and Henter, G.E. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. In Proceedings of the 25th International Conference on Multimodal Interaction, Paris, France, 9–13 September 2023.