Authors:
Cai Yuchen, Cao Ding, Guo Rongxi, Wen Yaqin, Liu Guiquan, Chen Enhong, Zhang Jingyun
Publisher:
Springer Nature Singapore
References (32 articles):
1. Bang, Y., Cahyawijaya, S., Lee, N., Dai, W., Su, D., Wilie, B., Lovenia, H., Ji, Z., Yu, T., Chung, W., et al.: A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023 (2023)
2. Brown, T., et al.: Language models are few-shot learners. Adv. Neural. Inf. Process. Syst. 33, 1877–1901 (2020)
3. Cheng, S., Tian, B., Liu, Q., Chen, X., Wang, Y., Chen, H., Zhang, N.: Can we edit multimodal large language models? arXiv preprint arXiv:2310.08475 (2023)
4. Cheng, S., Zhang, N., Tian, B., Dai, Z., Xiong, F., Guo, W., Chen, H.: Editing language model-based knowledge graph embeddings. arXiv preprint arXiv:2301.10405 (2023)
5. Dai, D., Sun, Y., Dong, L., Hao, Y., Ma, S., Sui, Z., Wei, F.: Why can GPT learn in-context? Language models implicitly perform gradient descent as meta-optimizers. In: ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models (2023)