1. Han, X., et al.: Pre-trained models: past, present and future. AI Open 2, 225–250 (2021)
2. Jing, L., Tian, Y.: Self-supervised visual feature learning with deep neural networks: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 43, 4037–4058 (2021)
3. Radford, A., Narasimhan, K.: Improving language understanding by generative pre-training (2018)
4. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: North American Chapter of the Association for Computational Linguistics (2018)
5. Li, B., Weng, Y., Xia, F., Deng, H.: Towards better Chinese-centric neural machine translation for low-resource languages. arXiv preprint arXiv:2204.04344 (2022)