Affiliation:
1. University of Illinois at Urbana-Champaign
2. Juji, Inc.
3. IBM Research AI
4. University of California
Abstract
The rise of increasingly powerful chatbots offers a new way to collect information through conversational surveys, in which a chatbot asks open-ended questions, interprets a user’s free-text responses, and probes answers whenever needed. To investigate the effectiveness and limitations of such a chatbot in conducting surveys, we conducted a field study involving about 600 participants. In this study with mostly open-ended questions, half of the participants took a typical online survey on Qualtrics and the other half interacted with an AI-powered chatbot to complete a conversational survey. Our detailed analysis of over 5,200 free-text responses revealed that the chatbot drove a significantly higher level of participant engagement and elicited significantly better-quality responses as measured by Gricean Maxims in terms of their informativeness, relevance, specificity, and clarity. Based on our results, we discuss design implications for creating AI-powered chatbots to conduct effective surveys and beyond.
Funder
Air Force Office of Scientific Research
Publisher
Association for Computing Machinery (ACM)
Subject
Human-Computer Interaction
References: 97 articles.
Cited by: 65 articles.