Abstract
Automatically assessing code for learning purposes is a challenging goal. Both on-site courses and online courses developed for distance learning require automated ways to grade learners’ programs in order to scale and to serve a large audience with a limited teaching staff. This paper reviews recent automated code assessment systems. It proposes a systematic review of the analyses these systems can perform and the techniques behind them, the kinds of feedback they produce, and the ways they are integrated into the learning process. It then discusses the key challenges for the development of new automated code assessment systems and their interaction with human grading. In conclusion, the paper offers several recommendations for new research directions and for possible improvements to automatic code assessment.