Evaluation of Google question-answering quality

Author:

Zhao Yiming, Zhang Jin, Xia Xue, Le Taowen

Abstract

Purpose: The purpose of this paper is to evaluate Google question-answering (QA) quality.

Design/methodology/approach: Given the large variety and complexity of Google answer boxes on search result pages, existing evaluation criteria for both search engines and QA systems seemed unsuitable. This study developed an evaluation criteria system for assessing Google QA quality by coding and analyzing search results for questions from a representative question set. The study then evaluated Google's overall QA quality, as well as QA quality across four target types and six question types, using the newly developed criteria system. ANOVA and Tukey tests were used to compare QA quality among the different target types and question types.

Findings: Google provided significantly higher-quality answers to person-related questions than to thing-related, event-related and organization-related questions. Google also provided significantly higher-quality answers to where-questions than to who-, what- and how-questions. The more specific a question was, the higher the QA quality tended to be.

Research limitations/implications: Suggestions for both search engine users and designers are presented to help enhance user experience and QA quality.

Originality/value: Particularly suitable for search engine QA quality analysis, the newly developed evaluation criteria system expanded and enriched the assessment metrics of both search engines and QA systems.
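Note: the abstract does not describe how the statistical comparison was implemented. As a rough, hypothetical illustration only (not the authors' actual procedure or data), a one-way ANOVA followed by a Tukey HSD post-hoc test over answer-quality scores grouped by target type could be run in Python with SciPy and statsmodels as sketched below; the scores, group sizes and differences are invented for illustration.

    # Minimal sketch, assuming invented quality scores grouped by target type.
    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(0)

    # Hypothetical answer-quality scores (roughly 0-1) per target type.
    groups = {
        "person":       rng.normal(0.75, 0.10, 30),
        "thing":        rng.normal(0.60, 0.10, 30),
        "event":        rng.normal(0.58, 0.10, 30),
        "organization": rng.normal(0.55, 0.10, 30),
    }

    # One-way ANOVA: is there any overall difference among the target types?
    f_stat, p_value = f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

    # Tukey HSD post-hoc test: which pairs of target types differ significantly?
    scores = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(scores, labels, alpha=0.05))

The same pattern applies to the six question types (who-, what-, where-, how-, etc.) by swapping the grouping variable.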

Publisher

Emerald

Subject

Library and Information Sciences, Information Systems

Cited by

11 articles.
