Author:
Wei Chao, Chen Yanping, Wang Kai, Qin Yongbin, Huang Ruizhang, Zheng Qinghua
Abstract
Prompt-tuning has been successfully applied to classification tasks in natural language processing and has achieved promising performance. The main characteristic of prompt-tuning-based classification is to verbalize class labels and predict masked tokens, as in a cloze task; this has the advantage of making use of the knowledge in pre-trained language models (PLMs). However, because prompt templates are manually designed, they are prone to overfitting. Furthermore, traditional prompt templates are appended at the tail of the original sentence, far from some of its semantic units, which weakens the decoding of input-relevant semantic information from PLMs. To aggregate more semantic information from PLMs for masked-token prediction, we propose an annotation-aware prompt-tuning model for relation extraction. In our method, entity type representations are used as entity annotations: they are implanted near the entity mentions in a sentence to decode semantic information from PLMs, which makes full use of the knowledge in PLMs for relation extraction. In the experiments, our method is validated on the Chinese literature text and SemEval 2010 Task 8 datasets and achieves F1-scores of 89.3% and 90.6%, respectively, reaching state-of-the-art performance on both public datasets. These results further demonstrate the effectiveness of our model in decoding semantic information from PLMs.
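To make the described input construction concrete, the sketch below (ours, not the authors' implementation) assembles an annotation-aware prompt in Python: entity-type markers are implanted next to the entity mentions, and a cloze-style template with a [MASK] slot is appended for the PLM to fill with a verbalized relation label. The marker format, template wording, type labels, and example sentence are all illustrative assumptions.

```python
# Minimal sketch of building an annotation-aware prompt input for
# relation extraction. NOT the authors' implementation: the marker
# format, template wording, and example are illustrative assumptions.

MASK = "[MASK]"  # slot the PLM fills with a verbalized relation label


def build_prompt(sentence: str,
                 head: str, head_type: str,
                 tail: str, tail_type: str) -> str:
    """Implant entity-type annotations next to each entity mention,
    then append a cloze-style template whose masked token the PLM
    resolves to a relation label word."""
    # Place the type annotation near the entity site rather than
    # appending all prompt material at the tail of the sentence.
    annotated = sentence.replace(head, f"{head} [{head_type}]", 1)
    annotated = annotated.replace(tail, f"{tail} [{tail_type}]", 1)
    return f"{annotated} The relation between {head} and {tail} is {MASK}."


if __name__ == "__main__":
    # Hypothetical cause-effect style example.
    print(build_prompt(
        "The burst has been caused by pressure.",
        head="burst", head_type="EVENT",
        tail="pressure", tail_type="PHENOMENON",
    ))
```

The tokenized string would then be fed to a masked language model, with the predicted token at the [MASK] position mapped back to a relation class via the verbalizer.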
Funder
National Natural Science Foundation of China
Publisher
Springer Science and Business Media LLC
Cited by
1 article.
1. Context-aware generative prompt tuning for relation extraction. International Journal of Machine Learning and Cybernetics, 2024-06-17.