Abstract
Purpose
The existing technology acceptance models have not yet investigated the functional and motivational factors affecting trust in and use of conversational artificial intelligence (AI) by integrating feedback and sequential updating mechanisms. This study challenged the existing models and constructed an integrated longitudinal model. Using a territory-wide two-wave survey of a representative sample, this new model examined the effects of hedonic motivation, social motivation, perceived ease of use, and perceived usefulness on continued trust, intended use, and actual use of conversational AI.

Design/methodology/approach
An autoregressive cross-lagged model was adopted to test the structural associations of the seven repeatedly measured constructs.

Findings
The results revealed that trust in conversational AI positively affected continued actual use, hedonic motivation increased continued intended use, and social motivation and perceived ease of use enhanced continued trust in conversational AI. While the original technology acceptance model was unable to explain the continued acceptance of conversational AI, the findings showed positive feedback effects of actual use on continued intended use. Except for trust, the sequential updating effects of all the measured factors were significant.

Originality/value
This study contributes to the technology acceptance and human–AI interaction paradigms by developing a longitudinal model of continued acceptance of conversational AI. This new model adds to the literature by considering the feedback and sequential updating mechanisms in understanding continued conversational AI acceptance.
Cited by: 2 articles.