Author:
Jill Burstein, Joel Tetreault, Martin Chodorow
Abstract
This paper reviews annotation schemes used to label discourse coherence in both well-formed and noisy (essay) data, and it describes a system we have developed for automated holistic scoring of essay coherence. We survey previous work on unsupervised computational approaches to evaluating discourse coherence, focusing on a taxonomy of discourse coherence schemes classified by their goals and data types. We illustrate how a holistic approach can be used successfully to build systems for noisy essay data across domains and populations. We discuss model features related to human scoring guide criteria for essay scoring, and the importance of using features relevant to these criteria in order to generate meaningful scores and feedback for students and test-takers. To demonstrate the effectiveness of a holistic annotation scheme, we present results of system evaluations.
Publisher
University of Illinois Libraries
Subject
Linguistics and Language, Communication, Language and Linguistics
Cited by
3 articles.