Authors:
Liu Jiawei, Xiong Zi, Jiang Yi, Ma Yongqiang, Lu Wei, Huang Yong, Cheng Qikai
Abstract
Purpose
Fine-tuning pre-trained language models (PLMs) such as SciBERT generally requires large amounts of annotated data to achieve state-of-the-art performance on a range of NLP tasks in the scientific domain. However, obtaining fine-tuning data for scientific NLP tasks remains challenging and expensive. In this paper, the authors propose mix prompt tuning (MPT), a semi-supervised method that aims to alleviate the dependence on annotated data and improve the performance of multi-granularity academic function recognition tasks.
Design/methodology/approach
Specifically, the proposed method provides multi-perspective representations by combining manually designed prompt templates with automatically learned continuous prompt templates, helping the given academic function recognition task take full advantage of the knowledge in PLMs. Based on these prompt templates and the fine-tuned PLM, a large number of pseudo labels are assigned to the unlabelled examples. Finally, the authors further fine-tune the PLM on the pseudo training set. The authors evaluate the method on three academic function recognition tasks of different granularity, namely citation function, abstract sentence function and keyword function, using data sets from the computer science and biomedical domains.
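To make the pseudo-labelling step concrete, the following is a minimal sketch of how a manual prompt template and a verbaliser can assign confident pseudo labels using a masked-language-model head. It is illustrative only, not the authors' implementation: the checkpoint is the public allenai/scibert_scivocab_uncased release, while the template wording, the verbaliser words, the 0.9 confidence threshold and the helper names are assumptions, and the learned continuous prompts and the fine-tuning stages of MPT are omitted.

```python
# Sketch of prompt-based pseudo-labelling with SciBERT (illustrative only).
# The template, verbaliser words and threshold below are assumptions, not
# the values used in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

# Hypothetical manual prompt template and verbaliser for citation function.
TEMPLATE = "This citation is used for {mask}. {sentence}"
VERBALISER = {"background": "background", "method": "method", "result": "result"}


@torch.no_grad()
def label_distribution(sentence: str) -> dict:
    """Score each label word at the [MASK] position of the manual template."""
    text = TEMPLATE.format(mask=tokenizer.mask_token, sentence=sentence)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    logits = model(**inputs).logits[0, mask_pos]
    scores = {
        label: logits[tokenizer.convert_tokens_to_ids(word)].item()
        for label, word in VERBALISER.items()
    }
    probs = torch.softmax(torch.tensor(list(scores.values())), dim=0)
    return dict(zip(scores.keys(), probs.tolist()))


def pseudo_label(unlabelled: list, threshold: float = 0.9) -> list:
    """Keep only confident predictions as pseudo-labelled training examples."""
    pseudo = []
    for sentence in unlabelled:
        dist = label_distribution(sentence)
        label, prob = max(dist.items(), key=lambda kv: kv[1])
        if prob >= threshold:
            pseudo.append((sentence, label))
    return pseudo
```

In this sketch the pseudo-labelled pairs returned by pseudo_label would be merged with the labelled data before the model is fine-tuned again; the continuous prompt templates described above would add a second, learned view over the same examples.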
Findings
Extensive experiments demonstrate the effectiveness of the method, with statistically significant improvements over strong baselines. In particular, under low-resource settings, MPT achieves an average increase of 5% in Macro-F1 score compared with fine-tuning and 6% compared with other semi-supervised methods.
Originality/value
In addition, MPT is a general method that can be easily applied to other low-resource scientific classification tasks.