Affiliation:
1. Arizona State University, Tempe, AZ 85281, USA
Abstract
Teaching students the concepts behind computational thinking is a difficult task, often gated by the inherent difficulty of programming languages. In the classroom, teaching assistants may be required to interact with students to help them learn the material, but time spent grading and writing feedback on assignments takes away from the time available to help students directly. We therefore offer a framework for developing an explainable Artificial Intelligence that performs automated analysis of student code while offering feedback and partial credit. The system is built on three core components: a knowledge base, a set of conditions to be analyzed, and a formal set of inference rules. In this paper, we develop such a system for our own language by employing Pi-Calculus and Hoare Logic. The system can also learn rules on its own: given solution files, it extracts the important aspects of a program and generates feedback that explicitly details the errors students make when they deviate from those aspects. The level of detail and the expected precision can be easily adjusted through parameter tuning and variety in the sample solutions.
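To make the three components concrete, the following is a minimal illustrative sketch (not the authors' implementation): a toy grader in which a knowledge base of Hoare-style expected postconditions, derived from a solution file, is checked against a student's final program state by a simple inference rule that produces feedback and partial credit. All names here (`check_submission`, the variables in `kb`) are hypothetical.

```python
# Toy sketch of the abstract's three components: a knowledge base of
# expected postconditions, conditions to analyze (the student's final
# state), and a simple inference rule yielding feedback + partial credit.

def check_submission(final_state, knowledge_base):
    """Compare a student's final program state against expected
    postconditions; return (score, feedback) with partial credit."""
    feedback = []
    passed = 0
    for var, (expected, hint) in knowledge_base.items():
        actual = final_state.get(var)
        if actual == expected:
            passed += 1
        else:
            # Explainable feedback: name the violated postcondition
            # and attach a solution-derived hint.
            feedback.append(
                f"{var}: expected {expected!r}, got {actual!r} -- {hint}")
    score = passed / len(knowledge_base) if knowledge_base else 1.0
    return score, feedback

# Hypothetical knowledge base extracted from a sample solution: each
# entry maps a variable to its expected postcondition and a hint.
kb = {
    "total": (15, "check that your loop accumulates every element"),
    "count": (5, "the counter should equal the list length"),
}

score, notes = check_submission({"total": 15, "count": 4}, kb)
# One of two postconditions holds, so the student earns half credit.
```

A real system in the paper's spirit would derive the knowledge base automatically from solution files and reason over program structure (via Pi-Calculus and Hoare Logic) rather than over final variable values alone.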
Publisher
Intelligence Science and Technology Press Inc.
Cited by
2 articles.