Understanding credibility judgements for web search snippets

Author:

Markus Kattenbeck, David Elsweiler

Abstract

Purpose – It is well known that information behaviour can be biased in countless ways and that users of web search engines have difficulty assessing the credibility of results. Yet, little is known about how search engine result page (SERP) listings are used to judge credibility and in which ways, if any, such judgements are biased. The paper aims to discuss these issues.

Design/methodology/approach – Two studies are presented. The first collects data by means of a controlled, web-based user study (N=105). Studying judgements for three controversial topics, the paper examines the extent to which users agree on credibility, the extent to which their judgements relate to those of objective assessors, and the extent to which judgements can be predicted by the users' position on and prior knowledge of the topic. A second, qualitative study (N=9) utilises the same setup; however, transcribed think-aloud protocols provide an understanding of the cues participants use to estimate credibility.

Findings – The first study reveals that users are very uncertain when assessing credibility and that their impressions often diverge from those of objective judges who have fact-checked the sources. Little evidence is found that judgements are biased by prior beliefs or knowledge, but differences are observed in the accuracy of judgements across topics. Qualitative analysis of the think-aloud transcripts reveals ten categories of cues that participants used to determine the credibility of results. Despite the brevity of the listings, participants utilised diverse cues for the same listing. Even when the same cues were identified and utilised, different participants often interpreted them differently. Example transcripts show how participants reach varying conclusions, illustrate common mistakes and highlight problems with existing SERP listings.
Originality/value – This study offers a novel perspective on how the credibility of SERP listings is interpreted when assessing search results. Especially striking is how the same short snippets provide diverse informational cues and how these cues can be interpreted differently depending on the user and his or her background. This finding is significant for how search engine results should be presented and opens up the new challenge of discovering technological solutions that allow users to better judge the credibility of information sources on the web.

Publisher

Emerald

Subject

Library and Information Sciences, Information Systems

References: 52 articles.

Cited by 24 articles.