Abstract
The Text REtrieval Conference (TREC) question answering track is an effort to bring the
benefits of large-scale evaluation to bear on a question answering (QA) task. The track has
run twice so far, first in TREC-8 and again in TREC-9. In each case, the goal was to retrieve
small snippets of text that contain the actual answer to a question rather than the document
lists traditionally returned by text retrieval systems. The best performing systems were able to
answer about 70% of the questions in TREC-8 and about 65% of the questions in TREC-9.
While the 65% score is slightly lower than the TREC-8 result in absolute terms, it
represents a very significant improvement in question answering systems. The TREC-9 task
was considerably harder than the TREC-8 task because TREC-9 used actual users’ questions
while TREC-8 used questions constructed for the track. Future tracks will continue to
challenge the QA community with more difficult, and more realistic, question answering tasks.
Publisher
Cambridge University Press (CUP)
Subject
Artificial Intelligence, Linguistics and Language, Language and Linguistics, Software
Cited by
82 articles.