Abstract
In this article we explore a problematic aspect of the automated assessment of diagrams. Diagrams have partial and sometimes inconsistent semantics. Typically, much of the meaning of a diagram resides in its labels; however, the choice of labeling is largely unrestricted. This means a correct solution may use labels that differ from, yet are semantically equivalent to, those in the specimen solution. A human marker can easily overcome this problem; with e-assessment, however, it is challenging. We empirically explore the scale of the synonym problem by analyzing 160 student solutions to a UML task. We find that the cumulative growth of synonyms shows only a limited tendency to level off at the margin, even after applying a range of text-processing algorithms such as stemming and automatic spelling correction. This finding has significant implications for the ease with which future e-assessment systems for diagrams can be developed: the need for better algorithms for assessing the semantic similarity of labels becomes inescapable.
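To illustrate the kind of text normalization the abstract refers to, the following is a minimal sketch (not the authors' implementation) of collapsing superficially different diagram labels via stemming, using NLTK's PorterStemmer; the labels and the normalize_label helper are invented for illustration.

```python
# Illustrative sketch: normalizing diagram labels with stemming so that
# superficially different but equivalent labels map to one canonical form.
# Requires NLTK (pip install nltk); PorterStemmer needs no corpus downloads.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def normalize_label(label: str) -> str:
    """Lower-case a label and stem each of its words."""
    return " ".join(stemmer.stem(word) for word in label.lower().split())

# Two student labels a human marker would likely treat as equivalent:
print(normalize_label("Customer Orders"))     # -> "custom order"
print(normalize_label("customers ordering"))  # -> "custom order"
```

As the abstract reports, such surface-level normalization only partially contains synonym growth: labels like "client purchases" would still not match "customer orders" without a measure of semantic similarity.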
Publisher
Association for Computing Machinery (ACM)
Cited by
6 articles.