Fine-tuning language models to recognize semantic relations
Published: 2023-07-23
Volume: 57
Issue: 4
Pages: 1463-1486
ISSN: 1574-020X
Container-title: Language Resources and Evaluation
Language: en
Short-container-title: Lang Resources & Evaluation
Authors: Dmitri Roussinov, Serge Sharoff, Nadezhda Puchnina
Abstract
Transformer-based pre-trained Language Models (PLMs) have emerged as the foundation of the current state-of-the-art algorithms for most natural language processing tasks, in particular when applied to context-rich data such as sentences or paragraphs. However, their impact on tasks defined in terms of abstract, individual word properties, not necessarily tied to a specific use in a particular sentence, has been inadequately explored. To fill this gap, we concentrate on the classification of semantic relations: given a pair of concepts (words or word sequences), the aim is to identify the semantic label that describes their relationship. For example, for the pair green/colour, “is a” is a suitable relation, while “part of”, “property of”, and “opposite of” are not. This classification is independent of any particular sentence in which the concepts might be used. We are the first to incorporate a language model into both existing approaches to this task, namely path-based and distribution-based methods. Our transformer-based approaches improve significantly on the state of the art and come remarkably close to human-level performance on rigorous benchmarks. We are also the first to provide evidence that the standard datasets overstate performance because of “lexical memorisation”, an effect we reduce by applying lexical separation. On the new benchmark datasets, algorithmic performance remains significantly below human level, showing that semantic relation classification is still unresolved, particularly for language models of the sizes commonly used at the time of our study. We also identify additional challenges that PLM-based approaches face and conduct extensive ablation studies and other experiments to investigate the sensitivity of our findings to specific modelling and implementation choices. Finally, we examine which relations pose greater challenges and discuss the trade-offs between accuracy and processing time.
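The task described above lends itself to a compact illustration. The sketch below is not the authors' implementation; it assumes the HuggingFace transformers library, the bert-base-uncased checkpoint, and a toy label set, purely to show the distribution-based setting: a pre-trained encoder fine-tuned to classify the relation between a concept pair with no sentence context, plus a helper illustrating the lexical separation idea used to counter lexical memorisation.

```python
# Hedged sketch only: model name, label set, and data format are assumptions,
# not the paper's actual configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

RELATIONS = ["is a", "part of", "property of", "opposite of"]  # assumed labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(RELATIONS)
)

# A training example is just a concept pair plus a relation label; the pair is
# encoded as a two-segment input: [CLS] green [SEP] colour [SEP]
inputs = tokenizer("green", "colour", return_tensors="pt")
labels = torch.tensor([RELATIONS.index("is a")])

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # plug into any standard fine-tuning loop from here

# After fine-tuning, the argmax over the relation logits is the prediction.
with torch.no_grad():
    logits = model(**tokenizer("green", "colour", return_tensors="pt")).logits
print(RELATIONS[logits.argmax(-1).item()])


def lexically_separated_split(pairs, test_vocab):
    """Toy version of lexical separation: every word belongs to either the
    training or the test vocabulary, and pairs mixing the two are discarded,
    so no test word is ever seen during training."""
    train, test = [], []
    for a, b, rel in pairs:
        flags = (a in test_vocab, b in test_vocab)
        if all(flags):
            test.append((a, b, rel))
        elif not any(flags):
            train.append((a, b, rel))
        # pairs with exactly one test-vocabulary word are dropped
    return train, test
```

Under a conventional random split, the same words recur across training and test pairs, so a model can score well simply by memorising word-to-relation associations; a lexically separated split removes that shortcut, which is why the abstract reports lower, more realistic performance on the new benchmarks.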
Publisher: Springer Science and Business Media LLC
Subjects: Library and Information Sciences, Linguistics and Language, Education, Language and Linguistics