Publisher: Springer Nature Singapore