Authors:
Ali Muhammad Asif, Sun Yifang, Zhou Xiaoling, Wang Wei, Zhao Xiang
Abstract
Distinguishing antonyms from synonyms is a key challenge for many NLP applications focused on lexical-semantic relation extraction. Existing solutions that rely on large-scale corpora yield low performance because antonym and synonym pairs occur in largely overlapping contexts. We propose a novel approach based entirely on pre-trained embeddings. We hypothesize that pre-trained embeddings encode a blend of lexical-semantic information, and that the task-specific information can be distilled from them using Distiller, a model proposed in this paper. A classifier is then trained on features constructed from the distilled sub-spaces, together with word-level features, to distinguish antonyms from synonyms. Experimental results show that the proposed model outperforms existing work on antonym-synonym distinction in both speed and performance.
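A minimal sketch of the pipeline the abstract describes, not the authors' Distiller itself: pre-trained word embeddings are mapped into a task-specific sub-space, pair features are built from the projected vectors, and a classifier is trained to separate antonyms from synonyms. The random embeddings, the fixed linear projection standing in for the distilled sub-space, and the feature construction are all illustrative assumptions; a real run would load actual pre-trained vectors and learn the projection.

```python
# Illustrative sketch only (not the paper's Distiller model).
# Random vectors stand in for pre-trained embeddings; the linear projection
# is a hypothetical placeholder for the learned task-specific sub-space.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy vocabulary with random 50-d "pre-trained" embeddings.
vocab = ["hot", "cold", "big", "large", "happy", "sad", "fast", "quick"]
emb = {w: rng.normal(size=50) for w in vocab}

# Labeled word pairs: 1 = antonym, 0 = synonym (toy data).
pairs = [("hot", "cold", 1), ("big", "large", 0),
         ("happy", "sad", 1), ("fast", "quick", 0)]

# Hypothetical "distilled" sub-space: a fixed linear projection of the embeddings.
proj = rng.normal(size=(50, 16))

def pair_features(w1, w2):
    """Concatenate absolute difference and element-wise product in the sub-space."""
    v1, v2 = emb[w1] @ proj, emb[w2] @ proj
    return np.concatenate([np.abs(v1 - v2), v1 * v2])

X = np.stack([pair_features(w1, w2) for w1, w2, _ in pairs])
y = np.array([label for _, _, label in pairs])

# Simple classifier over the pair features.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```

In the paper's setting, word-level features would be appended to the pair features before classification; here they are omitted for brevity.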
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
8 articles.
1. Antonymy-Synonymy Discrimination through Repelling Parasiamese Neural Networks;2023 International Joint Conference on Neural Networks (IJCNN);2023-06-18
2. A Comparative Study on Keyword Extraction and Generation of Synonyms in Natural Language Processing;2023 International Conference in Advances in Power, Signal, and Information Technology (APSIT);2023-06-09
3. Exploring Word-Sememe Graph-Centric Chinese Antonym Detection;Machine Learning and Knowledge Discovery in Databases: Research Track;2023
4. Antonymy-Synonymy Discrimination in Spanish with a Parasiamese Network;Advances in Artificial Intelligence – IBERAMIA 2022;2022
5. Paradigmatic System of Lexical Units (Based on Poetic Texts);Наукові записки Харківського національного педагогічного університету ім. Г. С. Сковороди "Літературознавство";2022