Affiliation:
1. Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, USA
Abstract
Because there are various measures for comparing methods, researchers in human-computer interaction generally find it difficult to draw solid conclusions about a particular method. We consider several candidate measures of effectiveness (e.g., thoroughness, validity, reliability) and then summarize studies that have compared usability evaluation methods (UEMs) using one or more of these measures. We find that studies do not always report the descriptive statistics needed to draw solid conclusions, especially in terms of validity. In addition, studies do not always compare UEMs against a standard yardstick, such as end-user testing, to establish an appropriate validity score. Finally, we suggest some possible ways to address criterion deficiency and criterion contamination, two important considerations for researchers attempting to optimize the balance between ultimate and actual criteria.
Subject
General Medicine, General Chemistry
Cited by
4 articles.
1. Game Engine Solutions;Simulation and Gaming;2018-02-14
2. Usability Evaluation of CADCAM: State of the Art;Procedia CIRP;2015
3. Evaluation in human–computer interaction;Evaluation of Human Work, 3rd Edition;2005-04-04
4. Bibliography;Designing Usable Electronic Text;2004-01-14