Affiliation:
1. Department of Educational Research Methodology, UNC Greensboro, North Carolina, USA
Abstract
The advent of generative AI such as ChatGPT has propelled the field of evaluation into conversations about the use of AI in the field and the ethics of knowledge generation. While AI offers many benefits, as with any new technology there can be collateral damage. The discourse about AI and evaluation provides another opportunity to center equity in our work as evaluators by asking: How can evaluation contribute to the public good in an AI world? This article highlights contextual concerns with AI from an ecosystem perspective, placing emphasis on structural and racial/ethnic inequities, bias, and prejudice. The author issues a clarion call for the field of evaluation to act collectively to incite change by being proactive, embracing our professional responsibility and critical voice, and employing evidence-based practice. Evaluators are encouraged to exercise our social and political responsibility through courageous leadership and advocacy to attend to the values of stakeholders and advance an equitable AI world.
Subject
Management Science and Operations Research, Strategy and Management, Education
Cited by 6 articles.