Abstract
It is widely documented that higher education institutional responses to the COVID-19 pandemic accelerated not only the adoption of educational technologies, but also associated socio-technical controversies. Critically, while these cloud-based platforms are capturing huge datasets and generating new kinds of learning analytics, there are few strongly theorised, empirically validated processes for institutions to consult their communities about the ethics of this data-intensive, increasingly algorithmically-powered infrastructure. This paper makes conceptual and empirical contributions to this challenge, focusing on the under-theorised and under-investigated phase required for ethics implementation, namely, joint agreement on ethical principles. We foreground the potential of ethical co-production through Deliberative Democracy (DD), which emerged in response to the crisis in confidence in how typical democratic systems engage citizens in decision making. This is tested empirically in the context of a university-wide DD consultation, conducted under pandemic lockdown conditions, co-producing a set of ethical principles to govern Analytics/AI-enabled Educational Technology (AAI-EdTech). Evaluation of this process takes the form of interviews conducted with students, educators, and leaders. Findings highlight that this methodology facilitated a unique and structured co-production process, enabling a range of higher education stakeholders to integrate their situated knowledge through dialogue. The DD process and product cultivated commitment and trust among the participants, informing a new university AI governance policy. The concluding discussion reflects on DD as an exemplar of ethical co-production, identifying new research avenues to advance this work. To our knowledge, this is the first application of DD for AI ethics, as is its use as an organisational sensemaking process in education.
Funder
Australian Research Council
University of Technology Sydney
University of Sydney
Publisher
Springer Science and Business Media LLC