Affiliation:
1. Financial Conduct Authority, UK
Abstract
The use of machine learning as an input into decision-making is on the rise, owing to its ability to uncover hidden patterns in large datasets and improve prediction accuracy. Questions have been raised, however, about the potential distributional impacts of these technologies, with one concern being that they may perpetuate or even amplify human biases from the past. Exploiting detailed credit file data for 800,000 UK borrowers, we simulate a switch from a traditional (logit) credit scoring model to ensemble machine-learning methods. We confirm that machine-learning models are more accurate overall. We also find that they do as well as the simpler traditional model on relevant fairness criteria, where these criteria pertain to overall accuracy and error rates for population subgroups defined along protected or sensitive lines (gender, race, health status, and deprivation). We do observe some differences in the way credit-scoring models perform for different subgroups, but these manifest under a traditional modelling approach, and switching to machine learning neither exacerbates nor eliminates these issues. The paper discusses some of the mechanical and data factors that may contribute to statistical fairness issues in the context of credit scoring.
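To make the evaluation concrete, the sketch below illustrates the kind of comparison the abstract describes: scoring the same borrowers with a traditional logit model and with a gradient-boosted ensemble, then reporting overall discrimination (AUC) alongside false positive and false negative rates for each subgroup of a protected characteristic. This is not the authors' code; the data are synthetic, and the feature names, 0.5 cut-off, and `group` attribute are placeholder assumptions.

```python
# Illustrative sketch (synthetic data, not the FCA credit-file dataset):
# compare a logit scorer with an ensemble on overall accuracy and on
# subgroup error rates, in the spirit of the paper's fairness criteria.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Synthetic borrower features, a binary protected attribute, and default outcomes.
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, size=n)  # placeholder protected/sensitive characteristic
logits = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)  # 1 = default

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

models = {
    "logit": LogisticRegression(max_iter=1000),
    "ensemble": GradientBoostingClassifier(random_state=0),
}

def subgroup_error_rates(y_true, y_pred, groups):
    """False positive and false negative rates per subgroup."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        yt, yp = y_true[m], y_pred[m]
        fpr = ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1)
        fnr = ((yp == 0) & (yt == 1)).sum() / max((yt == 1).sum(), 1)
        out[int(g)] = {"FPR": round(fpr, 3), "FNR": round(fnr, 3)}
    return out

for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    preds = (scores >= 0.5).astype(int)  # assumed cut-off; lenders would calibrate this
    print(name,
          "AUC:", round(roc_auc_score(y_te, scores), 3),
          "subgroup error rates:", subgroup_error_rates(y_te, preds, g_te))
```

Under this kind of comparison, a "statistical fairness issue" would show up as a gap in FPR or FNR between subgroups; the paper's finding is that such gaps, where present, appear under the logit benchmark as well and are not materially widened by the switch to ensemble methods.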
Publisher
Oxford University Press (OUP)
Subject
Management, Monitoring, Policy and Law; Economics and Econometrics
Cited by
11 articles.
1. Boundary-Guided Black-Box Fairness Testing; 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC); 2024-07-02
2. Fairness Testing: A Comprehensive Survey and Analysis of Trends; ACM Transactions on Software Engineering and Methodology; 2024-06-04
3. The Moral Psychology of Artificial Intelligence; Annual Review of Psychology; 2024-01-18
4. Causality-Aided Trade-Off Analysis for Machine Learning Fairness; 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE); 2023-09-11
5. Latent Imitator: Generating Natural Individual Discriminatory Instances for Black-Box Fairness Testing; Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis; 2023-07-12