Author:
Wang Junmei, Huang Jimmy X., Sheng Jinhua
Abstract
Although BERT-based short-text retrieval models achieve significant performance improvements, the efficiency and effectiveness of long-text retrieval remain challenging. This study therefore proposes an efficient long-text retrieval model based on BERT, called LTR-BERT, which improves retrieval speed while retaining most of the effectiveness of long-text retrieval. Specifically, the LTR-BERT model is trained on the relevance between short texts, and long documents are segmented and encoded offline. In the retrieval stage, only the query encoding and the matching scores are computed, which speeds up retrieval. Moreover, a query expansion strategy is designed to enrich the representation of the original query and to reserve encoding regions for it, which helps the model recover information missing from the representation stage. An interaction mechanism without trainable parameters takes into account both local semantic details and overall relevance, ensuring retrieval accuracy while further shortening the response time. Experiments are carried out on the MS MARCO Document Ranking dataset, which is specifically designed for long-text retrieval. Compared with the interaction-focused semantic matching method based on BERT-CLS, the proposed LTR-BERT improves MRR@10 by 2.74%, and the number of documents processed per millisecond increases by a factor of 333.
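The abstract describes an "encode documents offline, encode only the query online, score with a parameter-free interaction" pipeline. The sketch below illustrates that general idea in Python; the encoder choice (bert-base-uncased), segment length, [MASK]-based query expansion, and MaxSim-style aggregation are illustrative assumptions for the sketch, not the authors' exact LTR-BERT design.

```python
# Minimal sketch: offline document encoding, online query-only encoding,
# and a parameter-free interaction for scoring. All concrete choices here
# (encoder, lengths, MaxSim aggregation) are assumptions, not LTR-BERT itself.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

PASSAGE_LEN = 128  # assumed maximum tokens per document segment
QUERY_LEN = 32     # assumed fixed query length after [MASK] expansion


@torch.no_grad()
def encode(text: str, max_len: int) -> torch.Tensor:
    """Return L2-normalised token embeddings, shape (seq_len, hidden)."""
    enc = tokenizer(text, truncation=True, max_length=max_len,
                    return_tensors="pt")
    out = encoder(**enc).last_hidden_state.squeeze(0)
    return torch.nn.functional.normalize(out, dim=-1)


def segment(document: str, words_per_passage: int = 100) -> list[str]:
    """Offline step: split a long document into fixed-size word windows."""
    words = document.split()
    return [" ".join(words[i:i + words_per_passage])
            for i in range(0, len(words), words_per_passage)] or [""]


def expand_query(query: str, target_tokens: int = QUERY_LEN) -> str:
    """Pad the query with [MASK] tokens to reserve extra encoding slots
    (one possible reading of the query-expansion strategy)."""
    n_masks = max(target_tokens - len(tokenizer.tokenize(query)) - 2, 0)
    return query + " " + " ".join(["[MASK]"] * n_masks)


def maxsim_score(query_emb: torch.Tensor, passage_emb: torch.Tensor) -> float:
    """Parameter-free interaction: each query token keeps its best-matching
    passage token (cosine similarity); the per-token maxima are summed."""
    sim = query_emb @ passage_emb.T  # (query_len, passage_len)
    return sim.max(dim=1).values.sum().item()


# Offline: segment and encode the document collection once, then store it.
documents = {"doc1": "BERT-based retrieval of long documents ... " * 50}
index = {doc_id: [encode(p, PASSAGE_LEN) for p in segment(text)]
         for doc_id, text in documents.items()}

# Online: encode only the expanded query and score it against stored segments.
q_emb = encode(expand_query("efficient long text retrieval"), QUERY_LEN)
scores = {doc_id: max(maxsim_score(q_emb, p) for p in passages)
          for doc_id, passages in index.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

In this reading, the expensive document encoding is paid once offline, and the online cost is a single short query encoding plus cheap similarity look-ups, which is consistent with the speed-up the abstract reports.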
Funder
Natural Science Foundation of Zhejiang Province
National Natural Science Foundation of China
Publisher
Springer Science and Business Media LLC
Subject
Computational Mathematics, Engineering (miscellaneous), Information Systems, Artificial Intelligence
References (42 articles)
1. Peters ME, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L (2018) Deep contextualized word representations. In: Proc. 16th conf. North Am. chapter assoc. comput. linguist., pp 2227–2237. http://arxiv.org/abs/1802.05365
2. Devlin J, Chang M-W, Lee K, Toutanova K (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In: Proc. 17th conf. North Am. chapter assoc. comput. linguist. hum. lang. technol., Minneapolis, USA, pp 4171–4186. http://arxiv.org/abs/1810.04805
3. Liu C, Zhu W, Zhang X, Zhai Q (2023) Sentence part-enhanced BERT with respect to downstream tasks. Complex Intell Syst 9:463–474. https://doi.org/10.1007/s40747-022-00819-1
4. Wang Y, Rong W, Zhang J, Zhou S, Xiong Z (2020) Multi-turn dialogue-oriented pretrained question generation model. Complex Intell Syst 6:493–505. https://doi.org/10.1007/s40747-020-00147-2
5. Dai Z, Callan J (2019) Deeper text understanding for IR with contextual neural language modeling. In: Proc. 42nd int. ACM SIGIR conf. res. dev. inf. Retrieval (SIGIR’19), pp 985–988. https://doi.org/10.1145/3331184.3331303