Abstract
Recommender systems were originally proposed to suggest potentially relevant items to users, with the sole objective of providing accurate suggestions. As these recommenders were adopted across domains, they were found to generate biased results that can harm the items being recommended. The exposure in generated rankings, for instance in a job-candidate selection scenario, should be fairly distributed among candidates regardless of their sensitive attributes (gender, race, nationality, age), so as to promote equal opportunities. It can happen, however, that no such sensitive information is available in the data used to train the recommender; even then, biases can arise that lead to unfair treatment, a setting known as Feature-Blind unfairness. In this work, we adopt Variational Autoencoders (VAE), considered the state-of-the-art technique for Collaborative Filtering (CF) recommendation, and present a framework for addressing fairness when only information about user-item interactions is available. More specifically, we are interested in position and popularity bias. The VAE loss function combines two terms associated with accuracy and quality of representation; we introduce a third term that encourages fairness, and demonstrate that it promotes fair results at the cost of a tolerable decrease in recommendation quality. In our best scenario, position bias is reduced by 42% at the cost of a 26% decrease in recall in the top-100 recommendation results, compared to the same setting without any fairness constraints.
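The objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the multinomial reconstruction term and KL term follow the standard Mult-VAE objective for implicit feedback, while `fairness_penalty` is an assumed toy proxy (a correlation between item exposure and item popularity) standing in for the paper's fairness term; the names `beta` and `lam` are illustrative weights.

```python
import numpy as np

def recon_loss(logits, x):
    # Accuracy term: multinomial log-likelihood used by Mult-VAE
    # for binary implicit-feedback matrices x.
    log_softmax = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
    return -np.sum(x * log_softmax)

def kl_term(mu, logvar):
    # Representation-quality term: KL divergence between the
    # approximate posterior N(mu, exp(logvar)) and a standard normal.
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def fairness_penalty(exposure, popularity):
    # Assumed popularity-bias proxy: absolute correlation between an
    # item's exposure in the produced rankings and its historical
    # popularity; zero when exposure is uncorrelated with popularity.
    e = (exposure - exposure.mean()) / (exposure.std() + 1e-8)
    p = (popularity - popularity.mean()) / (popularity.std() + 1e-8)
    return np.abs(np.mean(e * p))

def total_loss(logits, x, mu, logvar, exposure, popularity,
               beta=0.2, lam=1.0):
    # Three-term objective: accuracy + beta * representation + lam * fairness.
    return (recon_loss(logits, x)
            + beta * kl_term(mu, logvar)
            + lam * fairness_penalty(exposure, popularity))
```

Increasing `lam` trades recommendation quality for fairness, which mirrors the accuracy/fairness trade-off reported in the abstract.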
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Hardware and Architecture, Human-Computer Interaction, Information Systems, Software
Cited by
3 articles.