Affiliation:
1. Faculty of Organizational Sciences, University of Belgrade, Serbia
2. Singidunum University, Serbia
3. The Institute for Artificial Intelligence Research and Development of Serbia, Serbia
Abstract
With growing awareness of the societal impact of decision-making, fairness has become an important issue. In many real-world situations, decision-makers can unintentionally discriminate against a certain group of individuals based on inherited or acquired attributes such as gender, age, race, or religion. In this paper, we introduce a post-processing technique, called fair additive weighting (FairAW), for achieving group and individual fairness in multi-criteria decision-making methods. The methodology changes the score of an alternative by imposing fair criteria weights, obtained by minimizing the differences between individuals' scores subject to a fairness constraint. The proposed methodology can be applied in any multi-criteria decision-making method where additive weighting is used to score individuals. We tested the method on both synthetic and real-world data and compared it to the Disparate Impact Remover and FA*IR methods, which are commonly used to achieve fair scoring of individuals. The results show that FairAW achieves group fairness in terms of statistical parity while retaining individual fairness, and that it attains the closest score equality between the discriminated and privileged groups.
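The weight-adjustment idea described above can be illustrated with a minimal sketch. This is an assumption-laden toy implementation, not the authors' actual FairAW algorithm: it uses SciPy's SLSQP solver to find criteria weights that keep additive-weighting scores close to the originals (individual fairness) while equalizing mean scores across a binary protected group (statistical parity). The function name `fair_additive_weights` and the exact constraint formulation are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def fair_additive_weights(X, original_w, protected):
    """Hypothetical sketch of a FairAW-style post-processing step.

    Finds criteria weights that keep additive-weighting scores close
    to the original ones while equalizing mean scores across groups.
    """
    n, m = X.shape
    base_scores = X @ original_w  # scores under the original weights

    def objective(w):
        # individual fairness: stay close to the original scores
        return np.sum((X @ w - base_scores) ** 2)

    def parity_gap(w):
        # group fairness: equal mean score for both groups
        s = X @ w
        return s[protected == 1].mean() - s[protected == 0].mean()

    cons = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},  # weights sum to 1
        {"type": "eq", "fun": parity_gap},               # statistical parity
    ]
    bounds = [(0.0, 1.0)] * m  # non-negative, bounded weights
    res = minimize(objective, original_w, bounds=bounds, constraints=cons)
    return res.x

# Toy data: 6 individuals, 2 criteria; the first criterion favors
# group 0 and the second favors group 1, so a fair solution exists.
X = np.array([
    [0.8, 0.2],
    [0.7, 0.3],
    [0.9, 0.1],
    [0.2, 0.8],
    [0.3, 0.7],
    [0.1, 0.9],
])
protected = np.array([0, 0, 0, 1, 1, 1])
w_fair = fair_additive_weights(X, np.array([0.7, 0.3]), protected)
```

In this symmetric toy example the constraints force the weights toward [0.5, 0.5], closing the mean-score gap between the two groups while staying as close as possible to the original scores.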
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Theoretical Computer Science