Affiliation:
1. Division of Speech, Music and Hearing (TMH), KTH Royal Institute of Technology, Sweden
Funders
Korean Ministry of Trade, Industry and Energy (MOTIE)
Digital Futures (AAIS)
Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation
References (46 articles).
1. Style‐Controllable Speech‐Driven Gesture Synthesis Using Normalising Flows
2. Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models
3. Tenglong Ao, Zeyi Zhang, and Libin Liu. [n. d.]. GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents. ACM Trans. Graph. ([n. d.]), 18 pages. https://doi.org/10.1145/3592097
4. Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. 2022. Data2vec: A general framework for self-supervised learning in speech, vision and language. In International Conference on Machine Learning. PMLR, 1298–1312.
5. BEAT
Cited by: 5 articles.
1. Semantic Gesticulator: Semantics-Aware Co-Speech Gesture Synthesis;ACM Transactions on Graphics;2024-07-19
2. State of the Art on Diffusion Models for Visual Computing;Computer Graphics Forum;2024-04-30
3. Unified Speech and Gesture Synthesis Using Flow Matching;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
4. Minimal Latency Speech-Driven Gesture Generation for Continuous Interaction in Social XR;2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR);2024-01-17
5. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings;International Conference on Multimodal Interaction;2023-10-09