Affiliation:
1. Beijing University of Posts and Telecommunications, Beijing, China
Funders:
National Key Research and Development Program of China
National Natural Science Foundation of China