Author:
Dong Peng, Loh Marie, Mondry Adrian
Abstract
Background
Relevance assessment is a major problem in the evaluation of information retrieval systems. The work presented here introduces a new parameter, "Relevance Similarity", for measuring the variation in relevance assessment. In a situation where individual assessments can be compared with a gold standard, this parameter can be used to study the effect of such variation on the performance of a medical information retrieval system. In such a setting, the Relevance Similarity of a document is the ratio of the number of assessors who rank that document the same as the gold standard to the total number of assessors in the group.
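Formally (notation introduced here for illustration; it is not taken verbatim from the paper), for a document d judged by a group of N assessors, with r_a(d) denoting assessor a's relevance judgment and g(d) the gold-standard judgment:

    RS(d) = |{ a : r_a(d) = g(d) }| / N

RS(d) = 1 when every assessor agrees with the gold standard, and RS(d) = 0 when none do.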
Methods
The study was carried out on a collection of Critically Appraised Topics (CATs). Twelve volunteers were divided into two groups according to their domain knowledge. They assessed the relevance of topics retrieved by querying a meta-search engine with ten keywords related to medical science. Their assessments were compared to the gold standard, and the Relevance Similarity of each topic was calculated as the ratio of positive concordance with the gold standard.
Results
The similarity comparison among groups showed that a higher degree of agreement exists among evaluators with more subject knowledge. For this particular query set, the performance of the retrieval system did not differ significantly as a result of the variations in relevance assessment.
Conclusion
In assessment situations where evaluators can be compared to a gold standard, Relevance Similarity provides an alternative to the commonly used kappa scores, which can give paradoxically low values in highly biased situations such as document repositories containing large proportions of relevant material.
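To make this kappa paradox concrete, the following minimal Python sketch (not part of the paper; the collection size and error counts are hypothetical) contrasts raw agreement with the gold standard, the quantity underlying Relevance Similarity, against Cohen's kappa on a collection in which nearly all documents are relevant:

    # Hypothetical illustration of the kappa paradox on a biased collection.

    def raw_agreement(a, b):
        """Fraction of documents on which two assessments coincide."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def cohens_kappa(a, b):
        """Cohen's kappa for two binary (0/1) assessments."""
        p_o = raw_agreement(a, b)                    # observed agreement
        p_a1 = sum(a) / len(a)                       # marginal "relevant" rates
        p_b1 = sum(b) / len(b)
        p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)  # chance agreement
        return (p_o - p_e) / (1 - p_e)

    # 100 documents: the gold standard marks 95 as relevant (1);
    # the assessor disagrees with it on only 4 documents overall.
    gold = [1] * 95 + [0] * 5
    assessor = [1] * 94 + [0] + [1] * 3 + [0] * 2

    print(f"raw agreement: {raw_agreement(gold, assessor):.2f}")  # 0.96
    print(f"Cohen's kappa: {cohens_kappa(gold, assessor):.2f}")   # ~0.48

Despite 96% raw agreement, kappa falls to roughly 0.48 because expected chance agreement is high when one class dominates; a concordance ratio such as Relevance Similarity is not deflated in this way.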
Publisher
Springer Science and Business Media LLC
Subject
Library and Information Sciences
Cited by: 5 articles.