Implementing equitable and intersectionality‐aware ML in education: A practical guide

Authors:

Mudit Mangal (1), Zachary A. Pardos (2)

Affiliation:

1. School of Information, University of California, Berkeley, Berkeley, California, USA

2. School of Education, University of California, Berkeley, Berkeley, California, USA

Abstract

The greater the proliferation of AI in educational contexts, the more important it becomes to ensure that AI adheres to the equity and inclusion values of an educational system or institution. Given that modern AI is based on historical datasets, mitigating historical biases with respect to protected classes (i.e., fairness) is an important component of this value alignment. Although extensive research has been done on AI fairness in education, there has been a lack of guidance for practitioners, which could enhance the practical uptake of these methods. In this work, we present a practitioner-oriented, step-by-step framework, based on findings from the field, for implementing AI fairness techniques. We also present an empirical case study that applies this framework to a grade prediction task using data from a large public university. Our novel findings from the case study and extended analyses underscore the importance of incorporating intersectionality (such as race and gender) as central to an institution's equity and inclusion values. Moreover, our research demonstrates the effectiveness of bias mitigation techniques, such as adversarial learning, in enhancing fairness, particularly for intersectional categories like race–gender and race–income.

Practitioner notes

What is already known about this topic

- AI-powered Educational Decision Support Systems (EDSS) are increasingly used in various educational contexts, such as course selection, admissions, scholarship allocation and identifying at-risk students.
- There are known challenges with AI in education, particularly around the reinforcement of existing biases, leading to unfair outcomes.
- The machine learning community has developed metrics and methods to measure and mitigate biases, which have been effectively applied to education, as seen in the AI in education literature.

What this paper adds

- Introduces a comprehensive technical framework for equity and inclusion, specifically for machine learning practitioners in AI education systems.
- Presents a novel modification to the ABROCA fairness metric to better represent disparities among multiple subgroups within a protected class.
- Provides an empirical analysis of the effectiveness of bias-mitigating techniques, such as adversarial learning, in reducing biases in intersectional classes (e.g., race–gender, race–income).
- Proposes model reporting in the form of model cards that can foster transparent communication among developers, users and stakeholders.

Implications for practice and/or policy

- The fairness framework can act as a systematic guide for practitioners to design equitable and inclusive AI-EDSS.
- The fairness framework can make compliance with emerging AI regulations more manageable.
- Stakeholders may become more involved in tailoring the fairness and equity model tuning process to align with their values.
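The standard ABROCA statistic that the paper builds on measures the absolute area between the ROC curves of two subgroups of a protected class; the paper's multi-subgroup modification is not reproduced here. Below is a minimal sketch of the standard two-group ABROCA, assuming scikit-learn and NumPy are available (the `abroca` function name and its arguments are illustrative, not from the paper):

```python
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_score, group):
    """Absolute Between-ROC Area for two subgroups (coded 0 and 1).

    y_true: binary labels; y_score: model scores; group: 0/1 subgroup ids.
    """
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    grid = np.linspace(0.0, 1.0, 1001)  # common FPR grid for both curves
    tprs = []
    for g in (0, 1):
        fpr, tpr, _ = roc_curve(y_true[group == g], y_score[group == g])
        tprs.append(np.interp(grid, fpr, tpr))  # ROC interpolated onto grid
    # Integrate the absolute gap between the two interpolated ROC curves
    return np.trapz(np.abs(tprs[0] - tprs[1]), grid)
```

A value near 0 indicates the model's ROC behaviour is similar across the two subgroups; values closer to 1 indicate greater disparity.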

Publisher

Wiley

References: 99 articles.

Cited by 1 article.
