Putting Fairness Principles into Practice: Challenges, Metrics, and Improvements
Author:
Affiliations:
1. Google, New York, NY, USA
2. Google, Mountain View, CA, USA
3. Google, San Bruno, CA, USA
4. Google, Seattle, WA, USA
Publisher: ACM
Link: https://dl.acm.org/doi/pdf/10.1145/3306618.3314234
References (38 articles):
1. Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna M. Wallach. 2018. A Reductions Approach to Fair Classification. CoRR, Vol. abs/1803.02453 (2018). arXiv:1803.02453. http://arxiv.org/abs/1803.02453
2. Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. 2014. Domain-Adversarial Neural Networks. CoRR, Vol. abs/1412.4446 (2014). arXiv:1412.4446. http://arxiv.org/abs/1412.4446
3. Y. Bechavod and K. Ligett. 2017. Penalizing Unfairness in Binary Classification. arXiv preprint arXiv:1707.00044 (2017).
4. Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H. Chi. 2017a. Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations. CoRR, Vol. abs/1707.00075 (2017). arXiv:1707.00075. http://arxiv.org/abs/1707.00075
5. Beyond Globally Optimal
Cited by (71 articles):
1. Developing equity-aware safety performance functions for identifying hotspots of pedestrian-involved crashes. Accident Analysis & Prevention, 2024-11.
2. The Fairness Stitch: A Novel Approach for Neural Network Debiasing. Acta Informatica Pragensia, 2024-08-22.
3. Addressing bias in bagging and boosting regression models. Scientific Reports, 2024-08-08.
4. Enhancing Algorithmic Fairness: Integrative Approaches and Multi-Objective Optimization Application in Recidivism Models. Proceedings of the 19th International Conference on Availability, Reliability and Security, 2024-07-30.
5. A survey on popularity bias in recommender systems. User Modeling and User-Adapted Interaction, 2024-07-01.