BACKGROUND
During the COVID-19 pandemic, medical laypersons with symptoms indicative of a COVID-19 infection commonly seek guidance on whether and where to seek medical care. Numerous web-based decision support tools (DSTs) have been developed, by both public and commercial stakeholders, to assist their decision-making. Though most of the DSTs' underlying algorithms are similar and simple decision trees, their mode of presentation differs: some DSTs present a static flowchart, while others are designed as a conversational agent, guiding the user through the decision tree's nodes step-by-step in an interactive manner.
OBJECTIVE
To investigate whether interactive DSTs provide greater decision support than non-interactive (ie, static) flowcharts.
METHODS
We developed mock interfaces for two DSTs (one static, one interactive), mimicking patient-facing, freely available DSTs for COVID-19-related self-assessment. Their underlying algorithm was identical and based on the Centers for Disease Control and Prevention's guidelines. We recruited adult US residents online in November 2020. Participants appraised the appropriate social and care-seeking behavior for seven fictitious descriptions of patients (case vignettes). Participants in the experimental groups received either the static or the interactive mock DST as support, while the control group appraised the case vignettes unsupported. We measured the quality of decision support by participants' accuracy, decision certainty (rated after each decision), and mental effort. Participants' ratings of the DSTs' usefulness and ease of use, their trust in the DSTs, and their intention to use the tools in the future served as measures of differences in how participants perceived the tools. We used analyses of variance (ANOVAs) and t tests to assess statistical significance.
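To make the analysis plan concrete, the following is a minimal sketch of the significance testing described above, written in Python with scipy. The simulated scores, group sizes, and the assumption of 14 assessments per participant (7 vignettes x 2 appraisals) are hypothetical illustrations; the study's actual data and analysis software are not specified here.

```python
# Hedged sketch of the ANOVA and t tests described above.
# The data below are simulated placeholders, not the study's data:
# we assume 7 vignettes x 2 appraisals = 14 assessments per participant
# and roughly equal group sizes summing to 196 respondents.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
interactive = np.clip(rng.normal(11.7, 2.4, 65).round(), 0, 14)
static = np.clip(rng.normal(11.5, 2.5, 65).round(), 0, 14)
control = np.clip(rng.normal(10.2, 2.0, 66).round(), 0, 14)

# One-way ANOVA across the three study groups.
f_stat, p_anova = stats.f_oneway(interactive, static, control)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# t tests comparing each intervention group with the control group.
for name, group in [("interactive", interactive), ("static", static)]:
    t_stat, p_val = stats.ttest_ind(group, control)
    print(f"{name} vs control: t={t_stat:.2f}, p={p_val:.4f}")
```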
RESULTS
Our survey yielded 196 responses. The mean number of correct assessments was higher in the intervention groups (interactive DST group: M=11.71, SD=2.37; static DST group: M=11.45, SD=2.48) than in the control group (M=10.17, SD=2.00). Decisional certainty was significantly higher in the experimental groups (interactive DST group: M=80.7%, SD=14.1%; static DST group: M=80.5%, SD=15.8%) than in the control group (M=65.8%, SD=20.8%). The differences in both measures were statistically significant in t tests comparing each intervention group with the control group (p<.001 for all four t tests). The ANOVA detected no significant differences in mental effort among the three study groups. Differences between the two intervention groups were of small effect size and nonsignificant for all three measures of decision support quality and for most measures of users' perception of the DSTs.
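For readers checking the effect size claim, the interactive-vs-static difference can be approximated directly from the reported summary statistics. Below is a hedged sketch computing Cohen's d with a pooled SD; the group size of 65 per group is an assumption (the abstract reports only 196 total respondents), and the actual analysis may have used a different effect size measure.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation of two groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Decisional certainty, interactive vs static DST group
# (means and SDs from the abstract; n=65 per group is an assumption).
d = cohens_d(80.7, 14.1, 65, 80.5, 15.8, 65)
print(f"Cohen's d (interactive vs static certainty): {d:.2f}")  # ~0.01, negligible
```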
CONCLUSIONS
When the decision space is limited, as is the case in common COVID-19 self-assessment DSTs, static flowcharts might prove as beneficial in enhancing decision quality as interactive tools. Given that static flowcharts reveal the underlying decision algorithm more transparently and require less effort to develop, they might prove more efficient in providing guidance to the public. Further research should validate our findings in different use cases, elaborate on the trade-off between transparency and convenience in DSTs, and investigate whether subgroups of users benefit more from one type of user interface than from the other.
CLINICALTRIAL