Authors:
Paolo Cremonesi, Dietmar Jannach
Abstract
Scholars in algorithmic recommender systems research have developed a largely standardized scientific method, where progress is claimed by showing that a new algorithm outperforms existing ones on one or more accuracy measures. In theory, reproducing and thereby verifying such improvements is easy, as it merely involves the execution of the experiment code on the same data. However, as recent work shows, the reported progress is often only virtual, because of a number of issues related to (i) a lack of reproducibility, (ii) technical and theoretical flaws, and (iii) scholarship practices that are strongly prone to researcher biases. As a result, several recent works have shown that the latest published algorithms actually do not outperform existing methods when evaluated independently. Despite these issues, we currently see no signs of a crisis in which researchers re-think their scientific method, but rather a situation of stagnation in which researchers continue to focus on the same topics. In this paper, we discuss these issues, analyze their potential underlying reasons, and outline a set of guidelines to ensure progress in recommender systems research.
Cited by
10 articles.
1. Effective music skip prediction based on late fusion architecture for user-interaction noise;Expert Systems with Applications;2024-03
2. Candidate Set Sampling for Evaluating Top-N Recommendation;2023 IEEE International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT);2023-10-26
3. Dataset versus reality: Understanding model performance from the perspective of information need;Journal of the Association for Information Science and Technology;2023-08-18
4. Take a Fresh Look at Recommender Systems from an Evaluation Standpoint;Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval;2023-07-18
5. When Newer is Not Better: Does Deep Learning Really Benefit Recommendation From Implicit Feedback?;Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval;2023-07-18