Modelling and Explaining IR System Performance Towards Predictive Evaluation

Author:

Faggioli, Guglielmo 1

Affiliation:

1. University of Padua, Italy

Abstract

Information Retrieval (IR) systems play a fundamental role in many modern services, including search engines (SEs), digital libraries, recommender systems, and social networks. The IR task is particularly challenging because of the volatility of IR system performance: users' information needs change daily, and so do the documents to be retrieved and the notion of what is relevant to a given information need. As a consequence, the empirical offline evaluation of an IR system is a costly and slow post-hoc procedure that takes place after the system has been deployed. Given these challenges, predicting a system's performance before its deployment would add significant value to the development of an IR system. In this manuscript, we lay the cornerstone for the prediction of IR performance by considering two closely related areas: the modeling of IR system performance and Query Performance Prediction (QPP). The former allows us to identify the features that impact performance the most and that can therefore be used as predictors, while the latter provides a starting point to instantiate the predictive task in IR. Concerning the modeling of IR performance, we first investigate one of the most popular statistical tools, ANOVA. In particular, we compare traditional ANOVA with a recent approach, bootstrap ANOVA, and observe the different conclusions that can be reached with these two statistical tools [Faggioli and Ferro, 2021]. Secondly, using ANOVA, we study the concept of topic difficulty and observe that it is not an intrinsic property of the information need but stems from the formulation used to represent the topic [Culpepper et al., 2022]. Finally, we show how to use Generalized Linear Models (GLMs) as an alternative to the traditional linear modeling of IR performance [Faggioli et al., 2022], and how GLMs provide more powerful inference with comparable stability. Our analyses in the QPP domain start with the development of a predictor that selects, among a set of reformulations of the same information need, the best-performing one for the systematic review task [Di Nunzio and Faggioli, 2021]. Secondly, we investigate how to classify queries as either semantic or lexical in order to predict whether neural models will outperform lexical ones. Finally, given the challenges encountered in evaluating the previous approaches, we devise a new evaluation procedure, dubbed sMARE [Faggioli et al., 2021]. sMARE moves from a single point estimate of QPP performance to a distributional one, enabling more informative comparisons between QPP models and more precise analyses.

Awarded by: University of Padova, Padova, Italy on 20 March 2023.
Supervised by: Nicola Ferro.
Available at: https://www.research.unipd.it/handle/11577/3472979.
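To make the distributional idea behind sMARE more concrete, below is a minimal, hypothetical Python sketch of a rank-based QPP evaluation in that spirit: each query receives a scaled absolute rank error, i.e. the normalized distance between the rank the predictor assigns it and the rank induced by the system's actual effectiveness, and the resulting per-query error distribution can then be analysed, with its mean as one possible summary. The function names, the exact scaling, and the toy data are illustrative assumptions rather than the thesis's reference implementation; see Faggioli et al. [2021] for the precise definition.

```python
import numpy as np

def sare(predictor_scores, true_effectiveness):
    """Per-query scaled absolute rank error: the normalized distance between
    the rank a query receives from the QPP predictor and the rank it receives
    from the actual effectiveness measure, over the same query set."""
    predictor_scores = np.asarray(predictor_scores, dtype=float)
    true_effectiveness = np.asarray(true_effectiveness, dtype=float)
    n = len(predictor_scores)
    # Rank 1 = query predicted / measured as the best performing one.
    rank_pred = np.empty(n, dtype=int)
    rank_pred[np.argsort(-predictor_scores)] = np.arange(1, n + 1)
    rank_true = np.empty(n, dtype=int)
    rank_true[np.argsort(-true_effectiveness)] = np.arange(1, n + 1)
    return np.abs(rank_pred - rank_true) / n

def smare(predictor_scores, true_effectiveness):
    """Mean of the per-query errors: one possible summary of the distribution."""
    return sare(predictor_scores, true_effectiveness).mean()

if __name__ == "__main__":
    qpp_scores = [0.8, 0.3, 0.5, 0.9]     # hypothetical predictor outputs, one per query
    ap_values = [0.45, 0.10, 0.30, 0.20]  # hypothetical AP of the system on each query
    print(sare(qpp_scores, ap_values))    # per-query error distribution
    print(smare(qpp_scores, ap_values))   # aggregated summary
```

The point of the distributional view is that the vector returned by `sare` can be compared across QPP models with standard statistical tools (e.g. significance tests over per-query errors), instead of reducing each model to a single correlation coefficient.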

Publisher

Association for Computing Machinery (ACM)

Subject

Hardware and Architecture, Management Information Systems
