Author:
Stacey Lynn von Winckelmann
Abstract
Purpose
This study aims to explore the perception of algorithm accuracy among data professionals in higher education.
Design/methodology/approach
Social justice theory guided this qualitative descriptive study, emphasizing four principles: access, participation, equity and human rights. Data collection included eight online open-ended questionnaires and six semi-structured interviews. Participants were higher education professionals who had worked with predictive algorithm (PA) recommendations programmed with student data.
Findings
Participants are aware of systemic and racial bias in their PA inputs and outputs and acknowledge their responsibility to ethically use PA recommendations with students in historically underrepresented groups (HUGs). For some participants, examining these topics through the lens of social justice was a new experience, which caused them to look at PAs in new ways.
Research limitations/implications
The small sample size is a limitation of the study. Implications for practice include increasing stakeholder training, creating an ethical data strategy that protects students, incorporating adverse childhood experiences data into algorithm recommendations, and applying a modified critical race theory framework to algorithm outputs.
Originality/value
The study explored the perception of algorithm accuracy among data professionals in higher education. Examining this topic through a social justice lens contributes to limited research in the field. It also presents implications for addressing racial bias when using PAs with students in HUGs.
Subject
Library and Information Sciences, Computer Science Applications, Education
Cited by
1 article.