Funder
National Natural Science Foundation of China