Affiliation:
1. Uppsala University
2. FNRS
3. UCLouvain
4. Northern Arizona University
Abstract
In Learner Corpus Research (LCR), a common source of error is the manual coding and annotation of linguistic features. Coefficients of inter-rater reliability are used to estimate the amount of error present in a coded dataset. However, despite the importance of reliability and internal consistency for validity and, by extension, for study quality, interpretability, and generalizability, it is surprisingly uncommon for studies in the field of LCR to report such reliability coefficients. In this Methods Report, we use a recent collaborative research project to illustrate the pertinence of considering inter-rater reliability. In doing so, we hope to initiate methodological discussion on instrument design, piloting, and evaluation. We also suggest some ways forward to encourage more transparent reporting practices.
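
The abstract refers to coefficients of inter-rater reliability without naming a particular measure. As an illustration only, the sketch below computes Cohen's kappa, one widely used coefficient for two annotators assigning nominal categories; the error-tag labels and rater data are hypothetical and are not drawn from the project described in the paper.

# Minimal sketch: Cohen's kappa for two raters coding the same items
# with nominal labels. Illustrative data only.
from collections import Counter

def cohens_kappa(annotator_a, annotator_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(annotator_a) == len(annotator_b), "Raters must code the same items"
    n = len(annotator_a)

    # Observed agreement: proportion of items given the same label by both raters.
    p_observed = sum(a == b for a, b in zip(annotator_a, annotator_b)) / n

    # Expected chance agreement, from each rater's marginal label distribution.
    freq_a, freq_b = Counter(annotator_a), Counter(annotator_b)
    p_expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(freq_a) | set(freq_b)
    )

    # Kappa = (observed - expected) / (1 - expected)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical error tags assigned to ten learner-produced sentences.
rater_1 = ["lexical", "syntax", "lexical", "none", "syntax",
           "lexical", "none", "syntax", "lexical", "none"]
rater_2 = ["lexical", "syntax", "syntax", "none", "syntax",
           "lexical", "none", "lexical", "lexical", "none"]

print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")  # ~0.70 here

Values near 1 indicate near-perfect agreement beyond chance, values near 0 indicate agreement no better than chance; which threshold counts as acceptable is a matter of interpretation and is one of the reporting issues the paper raises.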
Publisher
John Benjamins Publishing Company
Subject
Linguistics and Language, Education, Language and Linguistics
Cited by: 13 articles.