Affiliation:
1. University of Memphis, USA
Abstract
This chapter describes the testing of the computer-human interface of Virtual Internship Authorware (VIA), an authoring tool for creating web-based virtual internships. The authors describe several benchmark tasks that would be performed by authors who create lessons on the subject matter of land science. Performance on each task was measured by completion time and the likelihood of completing the task. Data were collected from ten novices and three experts who were familiar with the broader learning environment, called Internshipinator. Task completion times and the number of steps needed to complete each task were also modeled with GOMS (Goals, Operators, Methods, and Selection rules), a computational psychological model of computer-human interaction that predicts these measures of user performance. The GOMS simulations robustly predicted the task completion times and numbers of steps of both novices and experts. Large deviations between model predictions and human performance are expected to guide modifications of the authoring tool.
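The chapter's GOMS analysis is not reproduced in this abstract, but the flavor of such a prediction can be illustrated with a simplified Keystroke-Level Model (KLM) variant of GOMS. The operator set, timing constants, and the sample authoring task below are illustrative assumptions only; they are not the authors' actual model of the VIA interface.

```python
# Illustrative sketch only: a simplified Keystroke-Level Model (KLM) variant of GOMS.
# The operators, timing constants, and sample task are assumptions for illustration;
# they are not the chapter's actual GOMS model of the VIA authoring tool.

# Standard KLM operator time estimates (seconds), after Card, Moran, and Newell.
OPERATOR_TIMES = {
    "K": 0.28,   # keystroke or button press (average typist)
    "P": 1.10,   # point with the mouse to a target
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation before an action
}

def predict(task_operators):
    """Return (predicted completion time in seconds, number of observable steps)."""
    total_time = sum(OPERATOR_TIMES[op] for op in task_operators)
    observable_steps = sum(1 for op in task_operators if op != "M")  # M is covert
    return total_time, observable_steps

# Hypothetical benchmark task: think, point to a lesson-title field, click,
# move hands to the keyboard, think, then type a twelve-character title.
sample_task = ["M", "P", "K", "H", "M"] + ["K"] * 12
time_s, steps = predict(sample_task)
print(f"Predicted completion time: {time_s:.1f} s over {steps} observable steps")
```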