CCLCap-AE-AVSS: Cycle consistency loss based capsule autoencoders for audio–visual speech synthesis

Authors:

Subhayu Ghosh¹, Nanda Dulal Jana¹, Tapas Si², Saurav Mallik³, Mohd Asif Shah⁴,⁵,⁶

Affiliations:

1. Department of Computer Science and Engineering, National Institute of Technology Durgapur, West Bengal 713209, India

2. Department of Computer Science and Engineering, AI Innovation Lab, University of Engineering and Management, Jaipur, Rajasthan 303807, India

3. Department of Environmental Health, Harvard T.H. Chan School of Public Health, Boston, MA 02115, United States of America

4. Department of Economics, Kebri Dehar University, Jigjiga 3060, Ethiopia

5. Centre of Research Impact and Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India

6. Chitkara Centre for Research and Development, Chitkara University, Baddi 174103, Himachal Pradesh, India

Abstract

Audio–visual speech synthesis (AVSS) is a rapidly growing field within the paradigm of audio–visual learning, involving the conversion of one person’s speech into the audio–visual stream of another while preserving the speech content. AVSS comprises two primary components: voice conversion (VC), which alters the vocal characteristics of the source speaker to those of the target speaker, followed by audio–visual synthesis, which creates the audio–visual presentation of the converted VC output for the target speaker. Despite the progress in deep learning (DL) technologies, DL models for AVSS have received limited attention in the existing literature. Therefore, this article presents a novel approach to AVSS utilizing capsule network (Caps-Net)-based autoencoders with the incorporation of a cycle consistency loss. Caps-Net addresses the translation invariance issues of convolutional neural network approaches, enabling effective feature capture, while the cycle consistency loss ensures the retention of content information from the source speaker. The proposed approach is referred to as cycle consistency loss-based capsule autoencoders for audio–visual speech synthesis (CCLCap-AE-AVSS). CCLCap-AE-AVSS is trained and tested on the VoxCeleb2 and LRS3-TED datasets. Subjective and objective assessments of the generated samples demonstrate the superior performance of the proposed work compared to current state-of-the-art models.
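To make the cycle consistency idea above concrete, the following is a minimal PyTorch sketch of how such a loss is commonly computed in voice conversion; the mapping networks G_src2tgt and G_tgt2src, the mel-spectrogram shapes, and the L1 formulation are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    l1 = nn.L1Loss()

    def cycle_consistency_loss(G_src2tgt, G_tgt2src, mel_src, mel_tgt):
        """Map each mel-spectrogram (batch, n_mels, frames) to the other
        speaker's domain and back, then penalise any drift from the input.
        Minimising this term encourages the two mappings to preserve the
        linguistic content while altering only speaker characteristics."""
        cycled_src = G_tgt2src(G_src2tgt(mel_src))  # source -> target -> source
        cycled_tgt = G_src2tgt(G_tgt2src(mel_tgt))  # target -> source -> target
        return l1(cycled_src, mel_src) + l1(cycled_tgt, mel_tgt)

The same reconstruction penalty applies unchanged when the mappings are capsule autoencoders rather than plain convolutional generators.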

Publisher

Walter de Gruyter GmbH
