1. A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, et al., “Learning Transferable Visual Models from Natural Language Supervision,” in International Conference on Machine Learning, Jul. 2021.
2. J. Li, D. Li, C. Xiong, S. Hoi, “BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation,” in International Conference on Machine Learning, 2022.
3. H. Bao, L. Dong, S. Piao, F. Wei, “BEiT: BERT Pre-Training of Image Transformers,” in International Conference on Learning Representations, 2022.
4. H. W. Chung et al., “Scaling Instruction-Finetuned Language Models,” Journal of Machine Learning Research, Oct. 2024.
5. Y. Du, Z. Liu, J. Li, W. X. Zhao, “A Survey of Vision-Language Pre-Trained Models,” in International Joint Conference on Artificial Intelligence, 2022.