Affiliation:
1. The Pennsylvania State University
2. The College of New Jersey
Abstract
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. At the end of a classroom-based, sophomore-level management course, undergraduate participants completed a 100-item multiple-choice final examination and then answered an extended-response essay question comparing four management theories. The essays were quantified with ALA-Reader software using both sentence-wise and linear lexical aggregate approaches, and the results were then analyzed with Pathfinder KNOT software. The linear aggregate approach was a better measure of essay content structure than the sentence-wise approach, with the two approaches yielding significant Spearman correlations of 0.60 and 0.45, respectively, with the human rater essay scores. The group network representations of low- and high-performing students were reasonable and straightforward to interpret: the high group's network was more similar to the expert's, and the low and high groups were more similar to each other than either was to the expert. Suggestions for further research are provided.
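The sketch below is a minimal, hypothetical illustration of the general workflow the abstract describes: deriving a concept proximity matrix from an essay (here via simple sentence-level co-occurrence), scoring the essay by the overlap of its links with an expert's network, and checking convergent validity against human rater scores with a Spearman correlation. It does not reproduce the actual ALA-Reader or Pathfinder KNOT algorithms; the concept list, function names, and data are assumptions for demonstration only.

```python
# Illustrative sketch only: the study used ALA-Reader and Pathfinder KNOT,
# whose actual algorithms are not reproduced here. All names (CONCEPTS,
# cooccurrence_matrix, similarity_to_expert) and data are hypothetical.
import re
import numpy as np
from scipy.stats import spearmanr

# Hypothetical list of key concepts the scoring is anchored to
# (the four management theories are not named in the abstract).
CONCEPTS = ["scientific management", "human relations",
            "systems theory", "contingency theory"]


def cooccurrence_matrix(essay: str) -> np.ndarray:
    """Count how often each pair of concepts appears in the same sentence."""
    n = len(CONCEPTS)
    matrix = np.zeros((n, n))
    for sentence in re.split(r"[.!?]+", essay.lower()):
        present = [i for i, c in enumerate(CONCEPTS) if c in sentence]
        for i in present:
            for j in present:
                if i != j:
                    matrix[i, j] += 1
    return matrix


def similarity_to_expert(essay_matrix: np.ndarray,
                         expert_matrix: np.ndarray) -> float:
    """Score an essay by the proportion of shared links with the expert
    network (a simple link-overlap measure, not Pathfinder's own metric)."""
    essay_links = essay_matrix > 0
    expert_links = expert_matrix > 0
    shared = np.sum(essay_links & expert_links)
    total = np.sum(essay_links | expert_links)
    return shared / total if total else 0.0


# Convergent validity check: correlate computer-derived scores with
# human rater scores across a set of essays (toy data shown here).
computer_scores = [0.42, 0.75, 0.30, 0.66, 0.58]
human_scores = [3, 5, 2, 4, 4]
rho, p_value = spearmanr(computer_scores, human_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

In this toy setup, a higher overlap score means the essay's derived concept network shares more links with the expert's network, mirroring the abstract's finding that the high-performing group's network was more similar to the expert's.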
Subject
Computer Science Applications, Education
Cited by
27 articles.