Authors: Wang Xingjin, Li Linjing, Zeng Daniel
Publisher: Springer Nature Singapore
References (28 articles)
1. Adewoyin, R., Dutta, R., He, Y.: RSTGen: imbuing fine-grained interpretable control into long-FormText generators. In: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1822–1835. Association for Computational Linguistics, Seattle, United States, July 2022. https://doi.org/10.18653/v1/2022.naacl-main.133, https://aclanthology.org/2022.naacl-main.133
2. Alihosseini, D., Montahaei, E., Soleymani Baghshah, M.: Jointly measuring diversity and quality in text generation models. In: Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pp. 90–98. Association for Computational Linguistics, Minneapolis, Minnesota, June 2019. https://doi.org/10.18653/v1/W19-2311, https://aclanthology.org/W19-2311
3. Brown, T., et al.: Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 33, 1877–1901 (2020)
4. Clive, J., Cao, K., Rei, M.: Control prefixes for text generation. arXiv preprint arXiv:2110.08329 (2021)
5. Denkowski, M., Lavie, A.: Meteor universal: language specific translation evaluation for any target language. In: Proceedings of the Ninth Workshop on Statistical Machine Translation, pp. 376–380. Association for Computational Linguistics, Baltimore, Maryland, USA, June 2014. https://doi.org/10.3115/v1/W14-3348, https://aclanthology.org/W14-3348