Abstract
Emotional speech synthesis is one of the most challenging and promising topics in the speech field and remains an active research area. The emotional expressiveness, synthesis speed and robustness of synthetic speech still need improvement. Cycle-consistent Adversarial Networks (CycleGAN) enable bidirectional transformation of emotional corpus information, but a gap remains between the real target speech and the synthesized speech. To narrow this gap, we propose an emotional speech synthesis method combining multi-channel Time-frequency Domain Generative Adversarial Networks (MC-TFD GANs) and Mixup. The method comprises three stages: the multi-channel Time-frequency Domain GANs (MC-TFD GANs), Mixup-based loss estimation, and Mixup-based stacking of effective emotion regions. Within these stages, a gating unit, GTLU (gated tanh linear units), and an image-based representation of speech saliency regions are designed. In the first stage, a time-frequency-domain MaskCycleGAN built on the improved GTLU is combined with a time-domain CycleGAN built on the saliency regions to form the multi-channel GAN. Based on Mixup, the loss calculation and the degree of emphasis of the emotion regions are designed. Comparative experiments against several popular speech synthesis methods were carried out on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus, with a bidirectional three-layer long short-term memory (LSTM) model used as the verification model. The results show that the mean opinion score (MOS) and unweighted accuracy (UA) of the speech generated by the proposed method improved by 4% and 2.7%, respectively. The proposed model outperformed existing GAN models in both subjective evaluation and objective experiments, indicating that the speech it generates has higher reliability, better fluency and stronger emotional expressiveness.
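For readers unfamiliar with Mixup, the loss estimation and emotion-region stacking stages both build on the standard Mixup interpolation of sample pairs. The following is a minimal sketch of that generic operation only, not the paper's exact loss or region-weighting scheme; the function and variable names are hypothetical.

```python
import numpy as np

def mixup_pair(x_i, x_j, y_i, y_j, alpha=0.2):
    """Standard Mixup: convex combination of two samples and their labels.

    x_i, x_j : feature arrays of identical shape (e.g. mel-spectrogram segments)
    y_i, y_j : one-hot emotion label vectors
    alpha    : Beta-distribution shape parameter controlling interpolation strength
    """
    lam = np.random.beta(alpha, alpha)        # lambda ~ Beta(alpha, alpha)
    x_mix = lam * x_i + (1.0 - lam) * x_j     # interpolated input
    y_mix = lam * y_i + (1.0 - lam) * y_j     # interpolated (soft) label
    return x_mix, y_mix, lam
```

In standard Mixup the same coefficient lam is also used to interpolate the two corresponding loss terms; how the paper adapts this to weight salient emotion regions is described in the full text.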
Funder
the Dalian Science and Technology Star Project
the Intercollegiate cooperation projects of Liaoning Provincial Department of Education
Publisher
Springer Science and Business Media LLC
Cited by
4 articles.