Affiliation:
1. Southwest University (China)
2. Macquarie University
Abstract
Over the past decade, interpreter certification performance testing has gained momentum. Certification tests often involve high
stakes, since they can play an important role in regulating access to professional practice and serve to provide a measure of
professional competence for end users. The decision to award certification is based on inferences from candidates’ test scores
about their knowledge, skills and abilities, as well as their interpreting performance in a given target domain. To justify the
appropriateness of score-based inferences and actions, test developers need to provide evidence that the test is valid and
reliable through a process of test validation. However, there is little evidence that test qualities are systematically evaluated
in interpreter certification testing. In an attempt to address this problem, this paper proposes a theoretical argument-based
validation framework for interpreter certification performance tests so as to guide testers in carrying out systematic validation
research. Before presenting the framework, validity theory is reviewed, and an examination of the argument-based approach to
validation is provided. A validity argument for interpreter tests is then proposed, with hypothesized validity evidence. Examples
of evidence are drawn from relevant empirical work, where available. Gaps in the available evidence are highlighted and
suggestions for research are made.
Publisher
John Benjamins Publishing Company
Subject
Linguistics and Language, Language and Linguistics
Cited by
46 articles.