Abstract
Purpose
The purpose of this study is to compare the evaluation of search result lists and documents, in particular the evaluation criteria applied, the elements examined, the associations between criteria and elements, pre-evaluation, evaluation and post-evaluation activities, and the time spent on evaluation.

Design/methodology/approach
The study analyzed data collected from 31 general users through pre-questionnaires, think-aloud protocols and logs, and post-questionnaires. Types of evaluation criteria, elements, associations between criteria and elements, evaluation activities and their associated pre- and post-evaluation activities, and time spent were analyzed based on open coding.

Findings
The study identifies the similarities and differences between list and document evaluation by analyzing 21 evaluation criteria applied, 13 evaluation elements examined, pre-evaluation, evaluation and post-evaluation activities performed, and time spent. In addition, the authors explored the time spent evaluating lists and documents for different types of tasks.

Research limitations/implications
This study helps researchers understand the nature of list and document evaluation. It also connects the elements that participants examined to the criteria they applied, and further reveals problems associated with the lack of integration between list and document evaluation. The findings suggest that more elements, especially at the list level, be made available to support users in applying their evaluation criteria. Integrating list and document evaluation, and integrating pre-evaluation, evaluation and post-evaluation activities in interface design, is essential for effective evaluation.

Originality/value
This study fills a gap in current research concerning the comparison of list and document evaluation.
Subject
Library and Information Sciences, Information Systems
Cited by
7 articles.