Affiliations:
1. School of Information, University of California, Berkeley, Berkeley, California, USA
2. School of Education, University of California, Berkeley, Berkeley, California, USA
Abstract
The greater the proliferation of AI in educational contexts, the more important it becomes to ensure that AI adheres to the equity and inclusion values of an educational system or institution. Given that modern AI is trained on historic datasets, mitigating historic biases with respect to protected classes (i.e., fairness) is an important component of this value alignment. Although extensive research has been done on AI fairness in education, there has been a lack of guidance for practitioners, which could enhance the practical uptake of these methods. In this work, we present a practitioner‐oriented, step‐by‐step framework, based on findings from the field, for implementing AI fairness techniques. We also present an empirical case study that applies this framework to a grade prediction task using data from a large public university. Our novel findings from the case study and extended analyses underscore the importance of treating intersectionality (such as race and gender) as central to an institution's equity and inclusion values. Moreover, our research demonstrates the effectiveness of bias mitigation techniques, such as adversarial learning, in enhancing fairness, particularly for intersectional categories like race–gender and race–income.
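The fairness metric at the centre of the paper's methodological contribution, ABROCA (Absolute Between-ROC Area), measures the area between two subgroups' ROC curves. The sketch below implements the standard two-group formulation, not the paper's modified multi-subgroup variant; the threshold grid size and example data are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_score, group):
    """Absolute Between-ROC Area between two subgroups of a protected class.

    Integrates |TPR_a(fpr) - TPR_b(fpr)| over fpr in [0, 1]; a value of 0
    means the two groups' ROC curves coincide. Two-group case only.
    """
    grid = np.linspace(0.0, 1.0, 1001)            # shared FPR grid
    curves = []
    for g in np.unique(group):
        mask = group == g
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        curves.append(np.interp(grid, fpr, tpr))  # ROC on the shared grid
    gap = np.abs(curves[0] - curves[1])
    # trapezoidal rule on the uniform grid
    return float(np.sum((gap[1:] + gap[:-1]) / 2) * (grid[1] - grid[0]))
```

If the model scores are distributed identically across the two groups, the curves coincide and ABROCA is 0; the maximum possible value is 1.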
Practitioner notes
What is already known about this topic
AI‐powered Educational Decision Support Systems (EDSS) are increasingly used in various educational contexts, such as course selection, admissions, scholarship allocation and identifying at‐risk students.
There are known challenges with AI in education, particularly around the reinforcement of existing biases, leading to unfair outcomes.
The machine learning community has developed metrics and methods to measure and mitigate biases, which have been effectively applied to education as seen in the AI in education literature.
What this paper adds
Introduces a comprehensive technical framework for equity and inclusion, specifically for machine learning practitioners in AI education systems.
Presents a novel modification to the ABROCA fairness metric to better represent disparities among multiple subgroups within a protected class.
Provides an empirical analysis of the effectiveness of bias‐mitigating techniques, like adversarial learning, in reducing biases in intersectional classes (eg, race–gender, race–income).
Proposes model reporting in the form of model cards that can foster transparent communication among developers, users and stakeholders.
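The adversarial-learning mitigation analysed above can be sketched as follows: a predictor is trained on the main task while an adversary tries to recover the protected attribute from the predictor's output, and the predictor is penalised for whatever helps the adversary. This is a minimal NumPy sketch in the spirit of that technique, not the authors' implementation; the logistic models, the `lam` trade-off weight, and the training schedule are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_debias(X, y, a, lam=1.0, lr=0.1, epochs=500, seed=0):
    """Logistic predictor for task labels y, debiased against protected
    attribute a via an adversary that reads the predictor's output p.

    Predictor objective: task loss - lam * adversary loss (both cross-entropy).
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(0, 0.01, X.shape[1])   # predictor weights
    u = rng.normal(0, 0.01, 2)            # adversary weights on [p, bias]
    n = len(y)
    for _ in range(epochs):
        p = sigmoid(X @ w)                # task predictions
        # adversary step: logistic regression of a on p
        q = sigmoid(u[0] * p + u[1])
        u -= lr * np.array([np.mean((q - a) * p), np.mean(q - a)])
        # predictor step: task gradient minus the adversary's gradient through p
        dtask = X.T @ (p - y) / n
        dadv = X.T @ ((q - a) * u[0] * p * (1 - p)) / n
        w -= lr * (dtask - lam * dadv)
    return w
```

With `lam = 0` this reduces to plain logistic regression; increasing `lam` trades task accuracy for making the predictor's output less informative about the protected attribute.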
Implications for practice and/or policy
The fairness framework can act as a systematic guide for practitioners to design equitable and inclusive AI‐EDSS.
The fairness framework can act as a systematic guide for practitioners to make compliance with emerging AI regulations more manageable.
Stakeholders may become more involved in tailoring the fairness and equity model tuning process to align with their values.