Affiliation:
1. National Engineering Laboratory for Speech and Language Information Processing, University of Science and Technology of China, Hefei, P.R. China
2. Department of Electrical and Computer Engineering, Queen’s University, Kingston, Canada
Abstract
In this article, conditional-transforming variational autoencoders (CTVAEs) are proposed for generating diverse short-text conversations. In conditional variational autoencoders (CVAEs), the prior distribution of the latent variable z is a multivariate Gaussian whose mean and variance are modulated by the input conditions. Previous work found that this distribution tends to become condition-independent in practice. This article therefore designs CTVAEs to strengthen the influence of conditions in CVAEs. In a CTVAE model, the latent variable z is sampled by applying a non-linear transformation to the combination of the input conditions and samples drawn from a condition-independent prior distribution N(0, I). In experiments on a Chinese Sina Weibo dataset, the CTVAE model derives z samples for decoding with stronger condition dependency than the CVAE model. The earth mover's distance (EMD) between the distributions of the latent variable z at the training and testing stages is also reduced by the CTVAE model. In subjective preference tests, the proposed CTVAE model performs significantly better than CVAE and sequence-to-sequence (Seq2Seq) models at generating diverse, informative, and topic-relevant responses.
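The CTVAE sampling step described in the abstract can be sketched as follows. This is a minimal illustration only: the single-hidden-layer MLP, the dimensions, and all weight names are assumptions for demonstration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def ctvae_sample_z(condition, w1, b1, w2, b2, rng):
    """Sample latent z by non-linearly transforming [condition; eps],
    where eps ~ N(0, I) is condition-independent noise.
    (Illustrative one-hidden-layer MLP; the paper's transformation
    network may differ.)"""
    eps_dim = w1.shape[1] - condition.shape[0]
    eps = rng.standard_normal(eps_dim)          # sample from N(0, I)
    h = np.tanh(w1 @ np.concatenate([condition, eps]) + b1)
    return w2 @ h + b2                          # condition-dependent z

# Toy dimensions (assumed, not from the paper)
cond_dim, eps_dim, hid_dim, z_dim = 8, 4, 16, 4
w1 = rng.standard_normal((hid_dim, cond_dim + eps_dim)) * 0.1
b1 = np.zeros(hid_dim)
w2 = rng.standard_normal((z_dim, hid_dim)) * 0.1
b2 = np.zeros(z_dim)

c = rng.standard_normal(cond_dim)  # encoded input condition
z = ctvae_sample_z(c, w1, b1, w2, b2, rng)
```

Because the condition enters the transformation directly rather than only through the Gaussian's mean and variance, every z sample depends on the condition by construction, which is the property the article contrasts with the CVAE prior.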
Funder
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Cited by
2 articles.