Dense Text Retrieval based on Pretrained Language Models: A Survey

Author:

Wayne Xin Zhao1, Jing Liu2, Ruiyang Ren1, Ji-Rong Wen3

Affiliation:

1. Gaoling School of Artificial Intelligence, Renmin University of China, China

2. Baidu Inc., China

3. Gaoling School of Artificial Intelligence and School of Information, Renmin University of China, China

Abstract

Text retrieval is a long-standing research topic in information seeking, where a system is required to return relevant information resources in response to users' queries in natural language. From heuristic retrieval methods to learning-based ranking functions, the underlying retrieval models have evolved continually with ongoing technical innovation. A key point in designing effective retrieval models lies in how to learn text representations and model relevance matching. The recent success of pretrained language models (PLMs) sheds light on developing more capable text retrieval approaches by leveraging their excellent modeling capacity. With powerful PLMs, we can effectively learn the semantic representations of queries and texts in a latent representation space, and further construct a semantic matching function between the dense vectors for relevance modeling. Such a retrieval approach is called dense retrieval, since it employs dense vectors to represent the texts. Considering the rapid progress in this area, this survey systematically reviews recent work on PLM-based dense retrieval. Different from previous surveys on dense retrieval, we take a new perspective and organize the related studies by four major aspects, namely architecture, training, indexing, and integration, and thoroughly summarize the mainstream techniques for each aspect. We extensively collect recent advances on this topic and include 300+ reference papers. To support the survey, we create a website providing useful resources and release a code repository for dense retrieval. This survey aims to provide a comprehensive, practical reference on the major progress in dense text retrieval.
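The core idea described above, encoding queries and texts as dense vectors and scoring relevance by vector similarity, can be sketched as follows. This is a minimal illustration, not the survey's code: the stand-in embeddings would in practice be produced by a PLM encoder such as BERT, and the inner-product scorer is one common choice of matching function.

```python
import numpy as np

def dot_score(query_vec, passage_vecs):
    """Relevance of each passage to the query via inner product."""
    return passage_vecs @ query_vec

def retrieve_top_k(query_vec, passage_vecs, k=2):
    """Return indices of the k highest-scoring passages."""
    scores = dot_score(query_vec, passage_vecs)
    return np.argsort(-scores)[:k]

# Stand-in dense embeddings (in practice, the output of a PLM encoder).
query = np.array([1.0, 0.0, 1.0])
passages = np.array([
    [0.9, 0.1, 0.8],   # similar to the query
    [0.0, 1.0, 0.0],   # dissimilar
    [1.0, 0.2, 1.1],   # most similar
])

print(retrieve_top_k(query, passages, k=2))
```

At scale, the exhaustive scoring above is replaced by an approximate nearest-neighbor index over the passage vectors, which is the "indexing" aspect the survey covers.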

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Science Applications; General Business, Management and Accounting; Information Systems

References: 337 articles.

1. An information-theoretic perspective of tf–idf measures

2. Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. Synthetic QA Corpora Generation with Roundtrip Consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 6168–6173.

3. Negar Arabzadeh, Bhaskar Mitra, and Ebrahim Bagheri. 2021. MS MARCO Chameleons: Challenging the MS MARCO Leaderboard with Extremely Obstinate Queries. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 4426–4435.

4. Negar Arabzadeh, Alexandra Vtyurina, Xinyi Yan, and Charles L. A. Clarke. 2021. Shallow pooling for sparse labels. arXiv preprint arXiv:2109.00062 (2021).

5. Negar Arabzadeh, Xinyi Yan, and Charles L. A. Clarke. 2021. Predicting Efficiency/Effectiveness Trade-offs for Dense vs. Sparse Retrieval Strategy Selection. arXiv preprint arXiv:2109.10739 (2021).

Cited by 13 articles.

1. Advancing continual lifelong learning in neural information retrieval: Definition, dataset, framework, and empirical evaluation. Information Sciences, 2025-01.

2. Pipeline and dataset generation for automated fact-checking in almost any language. Neural Computing and Applications, 2024-08-02.

3. Leveraging LLMs for Unsupervised Dense Retriever Ranking. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2024-07-10.

4. Embark on DenseQuest: A System for Selecting the Best Dense Retriever for a Custom Collection. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2024-07-10.

5. Injecting the score of the first-stage retriever as text improves BERT-based re-rankers. Discover Computing, 2024-06-26.
