Author:
Evaggelia Pitoura, Kostas Stefanidis, Georgia Koutrika
Abstract
We increasingly depend on a variety of data-driven algorithmic systems to assist us in many aspects of life. Search engines and recommender systems, among others, are used as sources of information and help us make all sorts of decisions, from selecting restaurants and books to choosing friends and careers. This has given rise to important concerns regarding the fairness of such systems. In this work, we aim to present a toolkit of definitions, models and methods used for ensuring fairness in rankings and recommendations. Our objectives are threefold: (a) to provide a solid framework on a novel, quickly evolving and impactful domain, (b) to present related methods and put them into perspective, and (c) to highlight open challenges and research paths for future work.
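To make the notion of fairness in rankings concrete, a common family of metrics compares the position-discounted exposure that items from different groups receive. The sketch below is illustrative only and not taken from the surveyed work; the function name, the logarithmic discount, and the toy data are assumptions chosen for the example.

```python
import math
from collections import defaultdict

def group_exposure(ranking, groups):
    """Average position-discounted exposure per group.

    ranking: list of item ids, best first.
    groups: dict mapping each item id to its group label.
    The exposure of rank i (1-based) is 1 / log2(i + 1),
    a standard logarithmic position discount.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for i, item in enumerate(ranking, start=1):
        g = groups[item]
        totals[g] += 1.0 / math.log2(i + 1)
        counts[g] += 1
    # Average exposure received by a member of each group.
    return {g: totals[g] / counts[g] for g in totals}

# Toy ranking: groups G1 and G2 each hold two of the four positions.
exposure = group_exposure(
    ["a", "b", "c", "d"],
    {"a": "G1", "b": "G2", "c": "G1", "d": "G2"},
)
```

Even though both groups appear equally often here, G1 occupies the higher positions and therefore accumulates more exposure; a large gap between the two averages is one signal that a ranking may treat groups unfairly.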
Publisher
Springer Science and Business Media LLC
Subject
Hardware and Architecture, Information Systems
Cited by: 76 articles.