Affiliation:
1. MOE Key Laboratory of Trustworthy Distributed Computing and Service, Beijing University of Posts and Telecommunications, Beijing, China
Abstract
Document-level relation extraction (RE) aims to simultaneously predict the relations (including the no-relation case, denoted NA) between all entity pairs in a document. It is typically formulated as a relation classification task over pre-detected entities and solved with a hard-label training regime, which neglects the divergence within the NA class and the correlations among the other classes. This article introduces progressive self-distillation (PSD), a new training regime that employs online self-knowledge distillation (KD) to produce and incorporate soft labels for document-level RE. The key idea of PSD is to gradually soften the hard labels using past predictions from the RE model itself, adjusted adaptively as training proceeds. As such, PSD learns only one RE model within a single training pass, requiring no extra computation or annotation to pretrain a separate high-capacity teacher. PSD is conceptually simple, easy to implement, and generally applicable to various RE models to further improve their performance, without introducing additional parameters or significantly increasing training overhead. It is also a general framework that can be flexibly extended to distill various types of knowledge rather than being restricted to soft labels. Extensive experiments on four benchmark datasets verify the effectiveness and generality of the proposed approach. The code is available at https://github.com/GaoJieCN/psd
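
As an illustrative aid only (not the authors' released code), the sketch below shows one way such label softening could be realized: the training target is a convex mixture of the one-hot hard label and the model's own earlier soft prediction, with the mixing weight growing as training proceeds. The names `alpha_schedule` and `psd_target`, the linear schedule, and the toy relation logits are assumptions for illustration; the paper's adaptive adjustment may differ.

```python
# Minimal sketch of progressive self-distillation of labels (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def alpha_schedule(step: int, total_steps: int, max_alpha: float = 0.5) -> float:
    """Assumed schedule: linearly grow the softening weight from 0 to max_alpha."""
    return max_alpha * min(step / max(total_steps, 1), 1.0)

def psd_target(hard_labels: torch.Tensor,   # (batch, num_classes) one-hot labels
               past_probs: torch.Tensor,    # (batch, num_classes) model's earlier predictions
               alpha: float) -> torch.Tensor:
    """Soften hard labels with the model's own past predictions."""
    return (1.0 - alpha) * hard_labels + alpha * past_probs

# Toy usage: one training step on random relation logits.
num_classes, batch = 5, 4
logits = torch.randn(batch, num_classes, requires_grad=True)
hard = F.one_hot(torch.randint(num_classes, (batch,)), num_classes).float()
with torch.no_grad():
    past = torch.softmax(logits, dim=-1)     # stand-in for cached past predictions

alpha = alpha_schedule(step=100, total_steps=1000)
target = psd_target(hard, past, alpha)
loss = torch.sum(-target * F.log_softmax(logits, dim=-1), dim=-1).mean()
loss.backward()
```

Because the softened target is produced by the same model being trained, only a single model and a single training pass are needed, matching the online, teacher-free setting described in the abstract.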
Funder
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)