Affiliation:
1. Department of Computer Science, College of Computer, Qassim University, Buraydah, Saudi Arabia
2. Department of Computer Science, University of Liverpool, Liverpool, UK
Abstract
Word embedding models have recently shown some capability to encode the hierarchical information that exists in textual data. However, such models do not explicitly encode the hierarchical structure among words. In this work, we propose a method for learning hierarchical word embeddings (HWEs) that explicitly encode the hierarchical information of a knowledge base (KB) in a vector space. To learn the embeddings, our method considers not only the hypernym relations between words in the KB but also contextual information from a text corpus. Experimental results on a range of tasks, including supervised and unsupervised hypernymy detection, graded lexical entailment prediction, hierarchical path prediction, and word reconstruction, demonstrate the ability of the proposed method to encode the hierarchy. Moreover, the proposed method outperforms previously proposed non-specialised, hypernym-specific, and hierarchical word embedding methods on multiple benchmarks.
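To make the general idea concrete, below is a minimal sketch of how a corpus-based objective could be combined with hypernym pairs from a KB. It is not the authors' objective: the skip-gram-style co-occurrence term, the simple "pull the hyponym towards its hypernym" term, the toy vocabulary, the pair lists, and the weight lam are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch, not the paper's implementation: a skip-gram-style corpus
# term plus a simple hypernym-pull term. Vocabulary, pairs and weights below
# are illustrative assumptions only.

rng = np.random.default_rng(0)
vocab = ["animal", "mammal", "dog", "poodle", "bark"]
idx = {w: i for i, w in enumerate(vocab)}
E = rng.normal(scale=0.1, size=(len(vocab), 8))   # one 8-d vector per word

# (a) corpus signal: word pairs observed in the same context window
corpus_pairs = [("dog", "bark"), ("poodle", "dog"), ("mammal", "animal")]
# (b) KB signal: (hyponym, hypernym) pairs taken from a taxonomy such as WordNet
hypernym_pairs = [("dog", "mammal"), ("mammal", "animal"), ("poodle", "dog")]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, lam = 0.1, 0.5   # learning rate and weight of the hierarchy term
for _ in range(200):
    # corpus term: raise the inner product of co-occurring words
    for a, b in corpus_pairs:
        i, j = idx[a], idx[b]
        ei, ej = E[i].copy(), E[j].copy()
        g = 1.0 - sigmoid(ei @ ej)               # gradient of -log sigmoid(ei.ej)
        E[i] += lr * g * ej
        E[j] += lr * g * ei
    # hierarchy term: move each hyponym vector towards its hypernym vector
    for hypo, hyper in hypernym_pairs:
        i, j = idx[hypo], idx[hyper]
        E[i] -= lr * lam * (E[i] - E[j])

# inspect how close the learnt vectors are for related and unrelated pairs
print("dog.mammal :", round(float(E[idx["dog"]] @ E[idx["mammal"]]), 3))
print("bark.animal:", round(float(E[idx["bark"]] @ E[idx["animal"]]), 3))
```

A full-scale version of this sketch would add negative sampling and train over corpus-wide co-occurrence statistics rather than a handful of hand-picked pairs.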