Abstract
Automated chatbots powered by artificial intelligence (AI) can act as a ubiquitous point of contact, improving access to healthcare and empowering users to make effective decisions. Despite these potential benefits, emerging literature suggests that apprehensions linked to the distinctive features of AI technology and the specific context of use (healthcare) could undermine consumer trust and hinder widespread adoption. Although trust is considered pivotal to the acceptance of healthcare technologies, little research has examined the contextual factors that drive trust in AI-based Chatbots for Self-Diagnosis (AICSD). Accordingly, a contextual model grounded in the trust-in-technology framework was developed to explain the determinants of consumers' trust in AICSD and its behavioral consequences. The model was validated through a free simulation experiment in India (N = 202). Perceived anthropomorphism, perceived information quality, perceived explainability, disposition to trust technology, and perceived service quality influence consumers' trust in AICSD. In turn, trust, privacy risk, health risk, and gender determine the intention to use. The research contributes a context-specific, empirically validated model of trust in AICSD that can aid developers and marketers in enhancing consumers' trust in, and adoption of, such chatbots.
Publisher
Australasian Journal of Information Systems