Abstract
Peer assessment is a strategy in which students evaluate the level, value, or quality of their peers' work within the same educational setting. Research has shown that peer evaluation positively affects skill development and academic performance. By applying evaluation criteria to their peers' work and offering comments, corrections, and suggestions for improvement, students not only improve their own work but also develop critical thinking skills. To effectively nurture students' role as evaluators, deliberate and structured opportunities for practice, together with training and guidance, are essential.
Artificial Intelligence (AI) can provide a means to assess peer evaluations automatically, ensuring their quality and helping students carry out assessments accurately. This approach lets educators focus on evaluating student work without requiring specialized training in feedback evaluation.
This paper presents the process developed to automate the assessment of feedback quality. Using feedback fragments evaluated by researchers against pre-established criteria, an AI Large Language Model (LLM) was trained to perform the evaluation automatically. The findings show similarity between human and automated evaluation, which supports expectations about the possibilities of AI for this purpose. The challenges and prospects of this process are discussed, along with recommendations for optimizing results.
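The abstract does not detail the modelling choices, so the following is only a minimal sketch of the kind of pipeline it describes: fine-tuning a pretrained language model on researcher-annotated feedback fragments to predict a quality label (Python, using the Hugging Face transformers and datasets libraries). The base model, label scale, file name, and hyperparameters are illustrative assumptions, not the authors' actual setup.

import pandas as pd
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical three-point quality scale; the paper's actual criteria
# and label set are not given in the abstract.
LABELS = ["low", "medium", "high"]
BASE_MODEL = "distilbert-base-multilingual-cased"  # assumed base model

# Assumed input: a CSV of researcher-annotated fragments with columns
# "text" (the feedback fragment) and "label" (0..2 on the scale above).
df = pd.read_csv("annotated_feedback.csv")
splits = Dataset.from_pandas(df).train_test_split(test_size=0.2, seed=42)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

def encode(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

splits = splits.map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    BASE_MODEL, num_labels=len(LABELS)
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="feedback-quality-scorer",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    tokenizer=tokenizer,  # enables dynamic padding of batches
)
trainer.train()

# Evaluate on the held-out fragments; adding a compute_metrics function
# would report agreement between model predictions and the human labels.
print(trainer.evaluate())

Comparing the model's predictions on held-out fragments with the researchers' labels is one straightforward way to quantify the human-machine similarity the abstract reports.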
Publisher
Edicions de la Universitat de Barcelona