Improved Arabic Query Expansion using Word Embedding

Author:

Al-Lahham Yaser1

Affiliation:

1. Zarqa University

Abstract

Word embedding enhances pseudo-relevance feedback query expansion (PRFQE), but training word embedding models takes a long time and is usually done on large datasets. Moreover, training embedding models requires special processing for languages with rich vocabularies and complex morphological structures, such as Arabic. This paper proposes training such models on a representative subset of a dataset and defines the conditions for representativeness. Using a suitable subset of words to train a word embedding model is effective, since it dramatically decreases training time while preserving retrieval effectiveness. This paper shows that the subset of words bearing the prefix ‘AL’, the AL-Definite words, represents the TREC2001/2002 dataset; for example, training the SkipGram word embedding model on only the AL-Definite words of this dataset takes 10% of the time required by the whole dataset. The trained models are used to embed words in different scenarios of Arabic query expansion, and the proposed training method proves effective: it outperforms ordinary PRFQE by at least 7% in Mean Average Precision (MAP) and 14.5% in precision at the 10th returned document (P10). Moreover, the improvement over not using query expansion is 21.7% in MAP and 21.32% in P10. The results show no significant differences between the word embedding models used for Arabic query expansion.
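The subset-selection idea in the abstract can be sketched in a few lines: keep only the AL-Definite words (those carrying the Arabic definite article ال) and feed that reduced corpus to a SkipGram trainer. The corpus, function names, and window size below are illustrative assumptions, not details from the paper; a real setup would pass the filtered corpus to an actual SkipGram implementation.

```python
def al_definite_words(tokens):
    """Keep only tokens bearing the Arabic definite article 'AL' (ال)."""
    return [t for t in tokens if t.startswith("ال")]

def skipgram_pairs(tokens, window=2):
    """Generate (center, context) pairs the way a SkipGram trainer would."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

# Toy corpus: "the-city near the-river the-big"
corpus = ["المدينة", "قرب", "النهر", "الكبير"]
subset = al_definite_words(corpus)          # AL-Definite training subset
print(subset)                               # ['المدينة', 'النهر', 'الكبير']
print(len(skipgram_pairs(subset)))          # 6 (center, context) pairs
```

Because the AL-Definite subset is much smaller than the full vocabulary, the number of (center, context) pairs, and hence the SkipGram training time, shrinks accordingly, which is the source of the reported 10% training-time figure.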

Publisher

Research Square Platform LLC
