Authors:
Omar Alonso, Ralf Schenkel, Martin Theobald
Publisher:
Springer Berlin Heidelberg
References (5 articles):
1. Alonso, O., Mizzaro, S.: Can we get rid of TREC assessors? Using Mechanical Turk for relevance assessment. In: SIGIR IR Evaluation Workshop (2009)
2. Snow, R., et al.: Cheap and fast, but is it good? Evaluating non-expert annotations for natural language tasks. In: EMNLP (2008)
3. Piwowarski, B., Trotman, A., Lalmas, M.: Sound and complete relevance assessment for XML retrieval. ACM Trans. Inf. Syst. 27(1), 1–37 (2008)
4. Fuhr, N.: Lecture Notes in Computer Science (2008)
5. Denoyer, L., Gallinari, P.: The Wikipedia XML corpus. SIGIR Forum 40(1), 64–69 (2006)
Cited by (7 articles):
1. Repeatable and reliable semantic search evaluation; Journal of Web Semantics; 2013-08
2. SRbench: a benchmark for soundtrack recommendation systems; Proceedings of the 22nd ACM International Conference on Information & Knowledge Management (CIKM '13); 2013
3. One size does not fit all; Proceedings of the 22nd ACM International Conference on Information & Knowledge Management (CIKM '13); 2013
4. Repeatable and Reliable Semantic Search Evaluation; SSRN Electronic Journal; 2013
5. An analysis of human factors and label accuracy in crowdsourcing relevance judgments; Information Retrieval; 2012-07-20