Affiliation:
1. Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Malaysia
2. Prince Abdullah Bin Ghazi Faculty of Information Technology, Al Balqa Applied University, Jordan
Abstract
One of the main issues associated with search engines is the query–document vocabulary mismatch, a long-standing problem in Information Retrieval (IR). It occurs when the terms of a user query do not match those of the stored documents, and it affects most search tasks. Automatic query expansion (AQE) is one of the most common approaches used to address this problem. Various AQE techniques have been proposed, most of which involve finding synonyms or related words for the query terms; word embedding (WE) is one such method that is currently receiving significant attention. Most existing AQE techniques expand individual query terms rather than the query as a whole, which can lead to query drift if poor expansion terms are selected. In this article, we introduce Deep Averaging Networks (DANs), an architecture that feeds the average of the WE vectors produced by the Word2Vec toolkit for the terms in a query through several linear neural network layers. This average vector is assumed to represent the meaning of the query as a whole and can be used to find expansion terms that are relevant to the complete query. We explore the potential of DANs for AQE in Arabic document retrieval, experimenting with DANs for AQE in the classic probabilistic BM25 model as well as in two recent expansion strategies: the Embedding-Based Query Expansion approach (EQE1) and the Prospect-Guided Query Expansion Strategy (V2Q). Although DANs did not improve all outcomes when used with the BM25 model, they outperformed all baselines when incorporated into the EQE1 and V2Q expansion strategies.
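As a rough illustration of the whole-query representation the abstract describes, the sketch below averages the Word2Vec vectors of the query terms, passes the average through a small feed-forward stack, and selects expansion terms nearest to the resulting vector. The model path, layer sizes, random (untrained) weights, tanh nonlinearity, and candidate-selection step are illustrative assumptions, not the authors' exact configuration; the gensim API is used only for loading vectors and nearest-neighbour lookup.

```python
# Minimal sketch of a DAN-style query embedding for AQE, assuming a
# pretrained Arabic Word2Vec model is available on disk.
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical pretrained vectors; the filename is an assumption.
wv = KeyedVectors.load_word2vec_format("arabic_word2vec.bin", binary=True)
dim = wv.vector_size

rng = np.random.default_rng(0)
# Two layers stand in for the DAN's feed-forward stack; in practice
# these weights would be learned, not randomly initialized.
W1 = rng.normal(scale=0.1, size=(dim, dim))
W2 = rng.normal(scale=0.1, size=(dim, dim))

def dan_query_vector(query_terms):
    """Average the in-vocabulary term vectors, then apply the layers."""
    vecs = [wv[t] for t in query_terms if t in wv]
    if not vecs:
        raise ValueError("no query term found in the embedding vocabulary")
    avg = np.mean(vecs, axis=0)    # deep-averaging input
    hidden = np.tanh(avg @ W1)     # nonlinearity between layers (assumed)
    return hidden @ W2             # whole-query representation

def expansion_terms(query_terms, k=5):
    """Return the k vocabulary terms closest to the whole-query vector."""
    qvec = dan_query_vector(query_terms)
    # similar_by_vector ranks the vocabulary by cosine similarity.
    neighbours = wv.similar_by_vector(qvec, topn=k + len(query_terms))
    return [t for t, _ in neighbours if t not in query_terms][:k]
```

In this reading, expanding from the transformed average vector, rather than from each term's own neighbours, is what ties the selected expansion terms to the query as a whole and limits query drift.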
Funder
Universiti Kebangsaan Malaysia
Subject
Library and Information Sciences, Information Systems
Cited by
3 articles.