Affiliation:
1. CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Abstract
Recently, we have witnessed a bloom of neural ranking models in the information retrieval (IR) field. So far, much effort has been devoted to developing effective neural ranking models that generalize well to new data, while less attention has been paid to their robustness. Unlike effectiveness, which concerns the average performance of a system under normal use, robustness concerns the system's performance in the worst case or under malicious operations. When a new technique enters real-world applications, it is critical to know not only how it works on average, but also how it behaves in abnormal situations. We therefore raise the question in this work: are neural ranking models robust? To answer this question, we first clarify what we mean by the robustness of ranking models in IR. We show that robustness is in fact a multi-dimensional concept and can be defined in three ways in IR: (1) the performance variance under the independent and identically distributed (I.I.D.) setting; (2) the out-of-distribution (OOD) generalizability; and (3) the defensive ability against adversarial operations. The latter two definitions can each be further specified into two different perspectives, leading to five robustness tasks in total. Based on this taxonomy, we build corresponding benchmark datasets, design empirical experiments, and systematically analyze the robustness of several representative neural ranking models against traditional probabilistic ranking models and learning-to-rank (LTR) models. The empirical results show that there is no simple answer to our question. While neural ranking models are less robust than other IR models in most cases, some of them still win two out of five tasks. This is the first comprehensive study on the robustness of neural ranking models. We believe that the way we study robustness, as well as our findings, will be beneficial to the IR community. We will also release all the data and code to facilitate future research in this direction.
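To make the first robustness notion concrete, here is a minimal Python sketch, not taken from the paper: it contrasts two hypothetical rankers with equal average effectiveness but very different per-query variance, which is the quantity definition (1) cares about. The function name and data are illustrative assumptions only.

```python
# Sketch of robustness view (1): performance variance under the I.I.D. setting.
# Hypothetical example; the paper's exact metric may differ.
from statistics import mean, variance

def performance_variance(per_query_scores):
    """Return mean effectiveness and its sample variance over a query set.

    per_query_scores: per-query metric values (e.g. AP or nDCG@10),
    one entry per query in the test collection.
    """
    avg = mean(per_query_scores)      # the usual "effectiveness" number
    var = variance(per_query_scores)  # robustness view (1): lower is more robust
    return avg, var

# Two rankers with the same average effectiveness but different variance.
ranker_a = [0.52, 0.48, 0.50, 0.51, 0.49]  # stable across queries
ranker_b = [0.90, 0.10, 0.75, 0.20, 0.55]  # same mean, far less robust
print(performance_variance(ranker_a))  # (0.50, 0.00025)
print(performance_variance(ranker_b))  # (0.50, 0.11875)
```

The point of the example is that average effectiveness alone cannot distinguish the two rankers; only the variance term reveals that the second one degrades badly on some queries.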
Funder
National Natural Science Foundation of China
Youth Innovation Promotion Association CAS
Lenovo-CAS Joint Lab Youth Scientist Project
Foundation and Frontier Research Key Program of Chongqing Science and Technology Commission
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Science Applications, General Business, Management and Accounting, Information Systems
Cited by 6 articles.
1. Ranking-Incentivized Document Manipulations for Multiple Queries;Proceedings of the 2024 ACM SIGIR International Conference on Theory of Information Retrieval;2024-08-02
2. Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method;Proceedings of the 32nd ACM International Conference on Information and Knowledge Management;2023-10-21
3. The Editor and the Algorithm: Recommendation Technology in Online News;Management Science;2023-10-17
4. Towards Robust Neural Rankers with Large Language Model: A Contrastive Training Approach;Applied Sciences;2023-09-08
5. Topic-oriented Adversarial Attacks against Black-box Neural Ranking Models;Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval;2023-07-18