Algorithmic discrimination in the credit domain: what do we know about it?

Authors:

Ana Cristina Bicharra Garcia, Marcio Gomes Pinto Garcia, Roberto Rigobon

Abstract

The widespread use of machine learning systems and econometric methods in the credit domain has transformed the decision-making process for evaluating loan applications. Automated analysis of credit applications reduces the subjectivity of the decision-making process. On the other hand, because machine learning learns from past decisions recorded in financial institutions' datasets, the process often consolidates existing bias and prejudice against groups defined by race, sex, sexual orientation, and other attributes. Interest in identifying, preventing, and mitigating algorithmic discrimination has therefore grown rapidly in many areas, such as Computer Science, Economics, Law, and Social Science. We conducted a comprehensive systematic literature review to understand (1) the research settings, including the underlying discrimination theory, the legal framework, and the applicable fairness metrics; (2) the issues addressed and the solutions proposed; and (3) the open challenges for potential future research. We explored five sources: ACM Digital Library, Google Scholar, IEEE Digital Library, Springer Link, and Scopus. Applying inclusion and exclusion criteria, we selected 78 papers written in English and published between 2017 and 2022. According to the meta-analysis of this literature survey, algorithmic discrimination has been addressed mainly from the Computer Science, Law, and Economics perspectives. There has been great interest in this topic in the financial area, especially discrimination in access to the mortgage market and differential treatment (different fees, numbers of installments, and interest rates). Most attention has been devoted to potential discrimination due to bias in the dataset. Researchers are still dealing mainly with direct discrimination, addressed through algorithmic fairness, while indirect discrimination (structural discrimination) has not received the same attention.

Funder

Massachusetts Institute of Technology

Publisher

Springer Science and Business Media LLC

Subject

Artificial Intelligence, Human-Computer Interaction, Philosophy


Cited by 1 article.

1. Introduction; Ethics of Socially Disruptive Technologies; 2023-09-05
