Effects of academic achievement and group composition on the quality of student-generated questions and online procedural prompt usage patterns

Author:

Yu Fu-Yun, Cheng Wen-Wen

Abstract

This study examines whether and how academic achievement and gender group composition affect the quality of online student-generated questions (SGQ) and the usage patterns of procedural prompts provided to support SGQ activities. Forty-one university sophomores enrolled in an English as a foreign language class participated in a four-week study. All generated questions were categorized according to the revised Bloom’s taxonomy for quality evaluation, and a content analysis of the set of integrated online procedural prompts was conducted to reveal usage patterns. Five key findings were obtained. First, the online procedural prompts served as an efficacious learning scaffold, helping participants at both high and low academic achievement levels generate most of their questions at high cognitive levels. Second, Fisher’s exact test found no significant relationship between academic achievement and the quality of SGQ. Third, participants in the all-male and mixed-gender groups generated the majority of their questions at high cognitive levels, whereas the all-female group generated equal numbers of questions at low and high cognitive levels. Fourth, the chi-square test of independence found no significant relationship between gender group composition and the quality of SGQ. Fifth, the content analysis revealed that while students at both low and high academic achievement levels and in different gender group compositions exhibited some common usage patterns for the online procedural prompts, slightly different patterns were also observed.

Funder

Ministry of Science and Technology, Taiwan

Publisher

Springer Science and Business Media LLC

Subject

Management of Technology and Innovation, Media Technology, Education, Social Psychology

Cited by 2 articles.

1. A Comparative Analysis of Different Large Language Models in Evaluating Student-Generated Questions;2024 13th International Conference on Educational and Information Technology (ICEIT);2024-03-22

2. Can Autograding of Student-Generated Questions Quality by ChatGPT Match Human Experts?;IEEE Transactions on Learning Technologies;2024
