Authors:
Liu Xiaoyong, Wen Handong, Xu Chunlin, Du Zhiguo, Li Huihui, Hu Miao
Funders:
National Natural Science Foundation of China
Humanities and Social Science Project of the Ministry of Education
Guangdong Science and Technology Project
Guangzhou Science and Technology Planning Project
Guangdong Basic and Applied Basic Research Foundation
Project of the Education Department of Guangdong Province
Publisher:
Springer Science and Business Media LLC