Author:
Fang Wenlong, Ouyang Chunping, Lin Qiang, Yuan Yue
Abstract
In this paper, we study cross-domain relation extraction. Because new data mapped into the feature space always differs from previously seen data due to domain shift, few-shot relation extraction often performs poorly. To address the problems caused by domain shift, we propose a method that combines pure entity features, relation labels, and adversarial learning (PERLA). We first encode entities and complete sentences separately to obtain context-independent entity features. We then incorporate relation labels, which are informative for relation extraction, to mitigate context noise. Finally, we apply adversarial learning to reduce the noise introduced by the domain gap. We conducted experiments on the publicly available cross-domain relation extraction dataset FewRel 2.0 [1], and the results show that our approach improves accuracy and has better transferability, adapting more effectively to cross-domain tasks.
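The abstract names three ingredients: context-independent entity encoding alongside sentence encoding, relation-label information folded into the classifier, and an adversarial component for domain invariance. The sketch below illustrates how such pieces could fit together in a few-shot (N-way K-shot) setting; it is a minimal, assumption-laden sketch in PyTorch, not the authors' published PERLA architecture. The encoder stand-ins, hidden sizes, prototypical-network scoring, and gradient-reversal adversarial head are all illustrative choices.

```python
# Minimal sketch (assumptions, not the authors' model):
#   (1) entities and sentences encoded separately, then fused,
#   (2) relation-label embeddings added to the class prototypes,
#   (3) a domain classifier trained through gradient reversal so the
#       encoder learns domain-invariant features.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class PerlaSketch(nn.Module):
    def __init__(self, in_dim=768, hidden=256, num_domains=2):
        super().__init__()
        # Stand-ins for sentence, entity, and relation-label encoders
        # (e.g. pooled outputs of a pretrained encoder).
        self.sent_enc = nn.Linear(in_dim, hidden)
        self.ent_enc = nn.Linear(in_dim, hidden)    # entities encoded separately
        self.label_enc = nn.Linear(in_dim, hidden)  # relation-label description
        # Adversarial head: predicts the domain of an instance.
        self.domain_clf = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_domains)
        )

    def encode(self, sent_vec, ent_vec):
        # Simple additive fusion of the sentence and entity views.
        return self.sent_enc(sent_vec) + self.ent_enc(ent_vec)

    def forward(self, support, support_ents, query, query_ents,
                label_vecs, lambd=1.0):
        # support: [N, K, in_dim], query: [Q, in_dim], label_vecs: [N, in_dim]
        N, K, _ = support.shape
        sup = self.encode(support.view(N * K, -1),
                          support_ents.view(N * K, -1)).view(N, K, -1)
        qry = self.encode(query, query_ents)                    # [Q, hidden]
        # Prototype per relation: mean of support instances plus label embedding.
        protos = sup.mean(dim=1) + self.label_enc(label_vecs)   # [N, hidden]
        logits = -torch.cdist(qry, protos)                      # nearest prototype
        # Domain logits on gradient-reversed features drive domain invariance.
        dom_logits = self.domain_clf(GradReverse.apply(qry, lambd))
        return logits, dom_logits


if __name__ == "__main__":
    m = PerlaSketch()
    N, K, Q = 5, 1, 3   # 5-way 1-shot episode with 3 queries
    logits, dom = m(torch.randn(N, K, 768), torch.randn(N, K, 768),
                    torch.randn(Q, 768), torch.randn(Q, 768),
                    torch.randn(N, 768))
    print(logits.shape, dom.shape)  # torch.Size([3, 5]) torch.Size([3, 2])
```

In this kind of setup, the relation-classification loss on `logits` and the domain-classification loss on `dom_logits` would be optimized jointly; the gradient reversal pushes the shared encoder toward features the domain classifier cannot separate, which is one standard way to reduce cross-domain noise.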
Subject
Artificial Intelligence, Library and Information Sciences, Computer Science Applications, Information Systems
References (25 articles)
1. FewRel 2.0: Towards more challenging few-shot relation classification; Gao, 2019
2. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation; Han, 2018
3. Meta-learning with memory-augmented neural networks; Santoro, 2016
4. Matching networks for one shot learning; Vinyals, 2016
5. Learning to compare: Relation network for few-shot learning; Sung, 2018