Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding

Authors:

Mireille Fares, Catherine Pelachaud, Nicolas Obin

Abstract

Modeling virtual agents with behavior style is one factor in personalizing human-agent interaction. We propose an efficient yet effective machine learning approach to synthesize gestures driven by prosodic features and text in the style of different speakers, including those unseen during training. Our model performs zero-shot multimodal style transfer driven by multimodal data from the PATS database, which contains videos of various speakers. We view style as pervasive: while speaking, it colors the expressivity of communicative behaviors, while the content of speech is carried by multimodal signals and text. This disentanglement of content and style allows us to directly infer the style embedding even of a speaker whose data are not part of the training phase, without any further training or fine-tuning. The first goal of our model is to generate the gestures of a source speaker based on the content of two input modalities: mel-spectrogram and text semantics. The second goal is to condition the source speaker's predicted gestures on the multimodal behavior style embedding of a target speaker. The third goal is to allow zero-shot style transfer for speakers unseen during training, without re-training the model. Our system consists of two main components: (1) a speaker style encoder network that learns to generate a fixed-dimensional speaker style embedding from a target speaker's multimodal data (mel-spectrogram, pose, and text), and (2) a sequence-to-sequence synthesis network that synthesizes gestures based on the content of a source speaker's input modalities (text and mel-spectrogram), conditioned on the speaker style embedding. We show that our model can synthesize the gestures of a source speaker from the two input modalities and transfer the knowledge of target-speaker style variability learned by the speaker style encoder to the gesture generation task in a zero-shot setup, indicating that the model has learned a high-quality speaker representation. We conduct objective and subjective evaluations to validate our approach and to compare it with baselines.
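The two-component design described in the abstract lends itself to a compact illustration. The following is a minimal sketch, assuming a PyTorch implementation; all module names, layer sizes, the GRU-based encoders, and the concatenation-based style conditioning are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of the two-component architecture described in the abstract.
# Module names, dimensions, and fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn


class SpeakerStyleEncoder(nn.Module):
    """Maps a target speaker's multimodal sequence (mel-spectrogram, pose,
    text embeddings) to a single fixed-dimensional style embedding."""

    def __init__(self, mel_dim=80, pose_dim=64, text_dim=300, style_dim=128):
        super().__init__()
        self.proj = nn.Linear(mel_dim + pose_dim + text_dim, 256)
        self.rnn = nn.GRU(256, style_dim, batch_first=True)

    def forward(self, mel, pose, text):
        # mel, pose, text: (batch, time, features), assumed frame-aligned
        x = torch.cat([mel, pose, text], dim=-1)
        _, h = self.rnn(torch.relu(self.proj(x)))
        return h[-1]  # (batch, style_dim): temporal pooling via last hidden state


class GestureSynthesizer(nn.Module):
    """Sequence-to-sequence generator: source-speaker content (mel + text)
    conditioned on a target-speaker style embedding."""

    def __init__(self, mel_dim=80, text_dim=300, style_dim=128, pose_dim=64):
        super().__init__()
        self.encoder = nn.GRU(mel_dim + text_dim, 256, batch_first=True)
        self.decoder = nn.GRU(256 + style_dim, 256, batch_first=True)
        self.out = nn.Linear(256, pose_dim)

    def forward(self, mel, text, style):
        content, _ = self.encoder(torch.cat([mel, text], dim=-1))
        # Broadcast the style embedding over time and condition every frame on it.
        style_seq = style.unsqueeze(1).expand(-1, content.size(1), -1)
        hidden, _ = self.decoder(torch.cat([content, style_seq], dim=-1))
        return self.out(hidden)  # predicted pose sequence (batch, time, pose_dim)


# Zero-shot usage: the style embedding of an unseen speaker is inferred
# directly from that speaker's data, with no re-training or fine-tuning.
if __name__ == "__main__":
    enc, gen = SpeakerStyleEncoder(), GestureSynthesizer()
    mel_t = torch.randn(1, 200, 80)
    pose_t = torch.randn(1, 200, 64)
    text_t = torch.randn(1, 200, 300)
    style = enc(mel_t, pose_t, text_t)      # target speaker (possibly unseen)
    mel_s = torch.randn(1, 300, 80)
    text_s = torch.randn(1, 300, 300)
    gestures = gen(mel_s, text_s, style)    # source content, target style
    print(gestures.shape)                   # torch.Size([1, 300, 64])
```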

Publisher

Frontiers Media SA

Subject

Artificial Intelligence


Cited by 3 articles.

1. Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent; International Conference on Multimodal Interaction; 2023-10-09

2. Large language models in textual analysis for gesture selection; International Conference on Multimodal Interaction; 2023-10-09

3. A Comprehensive Review of Data-Driven Co-Speech Gesture Generation; Computer Graphics Forum; 2023-05
