Prompt Learning with Structured Semantic Knowledge Makes Pre-Trained Language Models Better
Published: 2023-07-30
Issue: 15
Volume: 12
Page: 3281
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author:
Zheng Hai-Tao 1,2 (ORCID), Xie Zuotong 1, Liu Wenqiang 3, Huang Dongxiao 3, Wu Bei 3, Kim Hong-Gee 4
Affiliation:
1. Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
2. Pengcheng Laboratory, Shenzhen 518055, China
3. Interactive Entertainment Group, Tencent Inc., Shenzhen 518057, China
4. School of Dentistry, Seoul National University, Seoul 03080, Republic of Korea
Abstract
Pre-trained language models enriched with structured semantic knowledge have demonstrated remarkable performance on a variety of downstream natural language processing tasks. The typical way to integrate such knowledge is to design dedicated pre-training tasks and train from scratch, which requires high-end hardware, massive storage resources, and long computing times. Prompt learning is an effective approach to tuning language models for specific tasks, and it can also be used to infuse knowledge. However, most prompt learning methods accept only a single token as the answer rather than multiple tokens. To tackle this problem, we propose the long-answer prompt learning method (KLAPrompt), with three different long-answer strategies, to incorporate semantic knowledge into pre-trained language models, and we compare the performance of these strategies experimentally. We also explore the effectiveness of KLAPrompt in the medical domain. Additionally, we construct a word sense prediction dataset (WSP) based on the Xinhua Dictionary and a disease and category prediction dataset (DCP) based on MedicalKG. Experimental results show that discrete answers with the answer space partitioning strategy achieve the best results, and that introducing structured semantic information consistently improves language modeling and downstream tasks.
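For illustration only, the sketch below shows one way a cloze-style prompt can be scored against a multi-token ("long") answer with an off-the-shelf masked language model. It is not the paper's KLAPrompt implementation; the prompt template, the candidate answers, and the score_answer helper are assumptions introduced here, and the example simply uses one [MASK] slot per answer token so that answers longer than a single token can be compared.

```python
# Minimal sketch of scoring multi-token ("long") answers with a masked LM.
# Assumptions: the prompt template, candidate answers, and score_answer
# helper are illustrative and not taken from the KLAPrompt paper.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def score_answer(prompt: str, answer: str) -> float:
    """Average log-probability of the answer tokens placed in [MASK] slots."""
    answer_ids = tokenizer(answer, add_special_tokens=False)["input_ids"]
    # One [MASK] per answer token, so multi-token answers fit the template.
    masked = prompt.replace(
        "[ANSWER]", " ".join([tokenizer.mask_token] * len(answer_ids))
    )
    inputs = tokenizer(masked, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # (1, seq_len, vocab_size)
    mask_positions = (
        inputs["input_ids"][0] == tokenizer.mask_token_id
    ).nonzero(as_tuple=True)[0]
    log_probs = torch.log_softmax(logits[0, mask_positions], dim=-1)
    token_scores = log_probs[
        torch.arange(len(answer_ids)), torch.tensor(answer_ids)
    ]
    return token_scores.mean().item()

# Usage: pick the better multi-token sense description for a word-sense-style prompt.
prompt = "The word 'bank' in this sentence means [ANSWER]."
candidates = ["a financial institution", "the side of a river"]
print(max(candidates, key=lambda a: score_answer(prompt, a)))
```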
Funder
National Natural Science Foundation of China; Research Center for Computer Network (Shenzhen) Ministry of Education; Beijing Academy of Artificial Intelligence; Natural Science Foundation of Guangdong Province; Basic Research Fund of Shenzhen City; Major Key Project of PCL for Experiments and Applications; Overseas Cooperation Research Fund of Tsinghua Shenzhen International Graduate School
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering