FREDA: Few-Shot Relation Extraction Based on Data Augmentation
Published: 2023-07-18
Volume: 13, Issue: 14, Page: 8312
ISSN: 2076-3417
Container-title: Applied Sciences
Short-container-title: Applied Sciences
Language: en
Authors:
Liu Junbao 1, Qin Xizhong 1,2, Ma Xiaoqin 1, Ran Wensheng 3
Affiliations:
1. College of Information Science and Engineering, Xinjiang University, Urumqi 830049, China
2. Xinjiang Signal Detection and Processing Key Laboratory, Urumqi 830049, China
3. Xinjiang Uygur Autonomous Region Product Quality Supervision and Inspection Institute, Urumqi 830049, China
Abstract
The primary task of few-shot relation extraction is to quickly learn the features of relation classes from a few labelled instances and to predict the semantic relations between entity pairs in new instances. Most existing few-shot relation extraction methods do not fully exploit the relation information present in sentences, which limits relation classification performance. Some researchers have attempted to incorporate external information, but the results have been unsatisfactory when applied to different domains. In this paper, we propose a method that uses triple information for data augmentation, which alleviates the issue of insufficient instances and possesses strong domain adaptation capabilities. First, we extract relations and entity pairs from the instances in the support set, forming relation triples. Next, the sentence information and the relation triple information are encoded with the same sentence encoder. Then, we construct an interactive attention module that lets the query set instances interact separately with the support set instances and the relation triple instances; the module attends more strongly to the highly interactive parts between instances and assigns them higher weights. Finally, we merge the interacted support set representation and the relation triple representation. To our knowledge, we are the first to propose using triple information for data augmentation in relation extraction. In experiments on the standard datasets FewRel 1.0 and FewRel 2.0 (domain adaptation), we observed substantial improvements without incorporating external information.
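The pipeline sketched in the abstract — encode query, support, and triple instances, let the query interact with the support and triple sets separately via attention, then merge the two interacted representations — can be illustrated as follows. This is a minimal NumPy sketch, not the authors' implementation: the dot-product scoring, the dimensions, and the averaging merge are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interactive_attention(query, keys):
    """Weight each key vector by its similarity to the query, so the
    highly interactive parts receive higher weights (assumed scoring)."""
    scores = keys @ query / np.sqrt(query.shape[-1])  # shape (n,)
    weights = softmax(scores)
    return weights @ keys  # attention-weighted sum, shape (d,)

rng = np.random.default_rng(0)
d = 8
query = rng.standard_normal(d)           # encoded query instance
support = rng.standard_normal((5, d))    # encoded support-set instances
triples = rng.standard_normal((5, d))    # encoded relation-triple instances

# the query interacts separately with the support set and the triples
support_repr = interactive_attention(query, support)
triple_repr = interactive_attention(query, triples)

# merge the two interacted representations (assumed: simple average)
merged = (support_repr + triple_repr) / 2
print(merged.shape)  # (8,)
```

In the paper the class score for the query would then be computed from this merged class representation; the merge here is a plain average, whereas a learned combination is equally plausible.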
Funder
Xinjiang Uygur Autonomous Region
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by 1 article.