Affiliation:
1. Gaoling School of Artificial Intelligence, Renmin University of China
2. Beijing Key Laboratory of Big Data Management and Analysis Methods
3. School of Information, Renmin University of China
Abstract
Text generation has become one of the most important yet challenging tasks in natural language processing (NLP). The resurgence of deep learning has greatly advanced this field through neural generation models, especially the paradigm of pretrained language models (PLMs). In this paper, we present an overview of the major advances achieved in the topic of PLMs for text generation. As preliminaries, we present the general task definition and briefly describe the mainstream architectures of PLMs for text generation. As the core content, we discuss how to adapt existing PLMs to model different input data and to satisfy special properties in the generated text. We further summarize several important fine-tuning strategies for text generation. Finally, we present several future directions and conclude the paper. Our survey aims to provide text generation researchers with a synthesis of, and pointers to, related research.
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
56 articles.