BACKGROUND
Clinical decision support systems (CDSSs) have great potential to improve healthcare performance and efficiency, increase safety, and reduce costs. Artificial intelligence–enabled CDSSs (AI-CDSSs) enhance decision-making by incorporating new AI technologies such as deep neural networks and knowledge graphs. However, the factors influencing doctors' acceptance of AI-CDSSs, especially among doctors at well-known tertiary hospitals in China, remain unclear.
OBJECTIVE
This study aimed to analyze the factors influencing doctors’ acceptance of AI-CDSSs in well-known tertiary hospitals in China.
METHODS
We extended the unified theory of acceptance and use of technology (UTAUT) model to propose a hypothesized model of doctors' acceptance of AI-CDSSs. We conducted a web-based survey of doctors from four well-known tertiary hospitals in Liaoning, Zhejiang, Sichuan, and Guangdong provinces. We developed and evaluated a 25-item measurement scale and used partial least squares structural equation modeling (PLS-SEM) to analyze the data.
RESULTS
A total of 187 doctors completed the survey, of whom 137 returned questionnaires deemed effective under our quality control strategy. Psychometric evaluation of the scale yielded a Cronbach α of 0.932 and corrected item-total correlations ranging from 0.467 to 0.744. Average variance extracted (AVE) values ranged from 0.628 to 0.782, composite reliability values from 0.871 to 0.931, and heterotrait-monotrait ratio values from 0.254 to 0.845. In model testing, variance inflation factors ranged from 1 to 2.21; the standardized root mean square residual was 0.055; and the squared Euclidean distance (0.995) and geodesic distance (0.566) were both below the upper bounds of their 95% confidence intervals (0.997 and 0.647, respectively). The final model explained 73.1% of the variance in user acceptance.
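To make the reliability and validity statistics reported above concrete, the sketch below shows standard formulas for Cronbach α, average variance extracted (AVE), and composite reliability. This is an illustrative computation on synthetic data, not the authors' actual analysis pipeline; the example factor loadings are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of squared standardized loadings."""
    return float(np.mean(loadings ** 2))

def composite_reliability(loadings: np.ndarray) -> float:
    """Composite reliability (Joreskog rho) from standardized loadings."""
    num = loadings.sum() ** 2
    err = np.sum(1 - loadings ** 2)  # residual (error) variances
    return float(num / (num + err))

# Hypothetical standardized loadings for one latent construct
loadings = np.array([0.82, 0.79, 0.85, 0.80])
print(round(ave(loadings), 3))                    # AVE
print(round(composite_reliability(loadings), 3))  # CR
```

In practice these statistics come from the fitted measurement model; in this study, AVE and composite reliability fell within the 0.628-0.782 and 0.871-0.931 ranges reported above.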
CONCLUSIONS
Doctors' acceptance of AI-CDSSs is strongly influenced by effort expectancy, moderately influenced by trust in (medical) AI, and weakly influenced by social factors. Performance expectancy does not directly influence doctors' acceptance; however, it can affect acceptance indirectly, with trust in (medical) AI acting as a mediator. Moreover, trust in (medical) AI, an important new factor in our extended UTAUT model, is moderately influenced by social factors and weakly influenced by effort expectancy and performance expectancy. Finally, effort expectancy is influenced by personal innovativeness, and performance expectancy is influenced by task-technology fit.