Authors: Tan Yiming, Min Dehai, Li Yu, Li Wenbo, Hu Nan, Chen Yongrui, Qi Guilin
Publisher: Springer Nature Switzerland