BACKGROUND
The exponential growth in computing power and the increasing digitization of information have substantially advanced the machine learning (ML) research field. However, ML algorithms are often regarded as “black boxes”, which fosters distrust. In medical domains, where mistakes can have fatal outcomes, practitioners may be especially reluctant to trust ML algorithms.
OBJECTIVE
To explore the effect of user-interface design features on intensivists’ trust in an ML-based clinical decision support system.
METHODS
Forty-seven physicians from critical care specialties were presented with three patient cases of bacteremia in an ML-based simulation system. Three simulation conditions, defined by combinations of information relevancy and interactivity, were tested. Participants’ trust in the system was assessed by their agreement with the system’s diagnoses and by a post-experiment questionnaire. Linear regression models were applied to measure the effects.
RESULTS
Participants’ agreement with the system’s diagnoses did not differ across experimental conditions. However, in the post-experiment questionnaire, higher information relevancy ratings and interactivity ratings were associated with higher perceived trust in the system (P < 0.001 for both). Explicit visual presentation of the ML algorithm’s features on the user interface resulted in lower participant trust (P < 0.05).
CONCLUSIONS
Information relevancy and interactivity features should be considered in the design of the user interface of ML-based clinical decision support systems to enhance intensivists’ trust. This study sheds light on the connection between information relevancy, interactivity, and trust in human–ML interaction, specifically in the intensive care unit environment.
CLINICALTRIAL
Non-clinical