BACKGROUND
Evidence-based point-of-care information (POCI) tools can facilitate patient safety and care by helping clinicians answer disease state and drug information questions in less time and with less effort. However, these tools may also be visually challenging to navigate or may lack the comprehensiveness needed to sufficiently address a medical issue.
OBJECTIVE
This study aimed to collect clinicians’ feedback and directly observe their use of the combined POCI tool DynaMed and Micromedex with Watson, now known as DynaMedex. EBSCO partnered with IBM Watson Health, now known as Merative, to develop the combined tool as a resource for clinicians. We aimed to identify areas for refinement based on participant feedback and examine participant perceptions to inform further development.
METHODS
Participants (N=43) in varying clinical roles and specialties were recruited from Brigham and Women’s Hospital and Massachusetts General Hospital in Boston, Massachusetts, United States, between August 10, 2021, and December 16, 2021, to take part in usability sessions evaluating the efficiency and effectiveness of, as well as satisfaction with, the DynaMed and Micromedex with Watson tool. Usability testing methods, including think-aloud protocols and observations of user behavior, were used to identify challenges with the combined tool. Data collection included measurements of time on task; task ease; satisfaction with the answer; posttest feedback on likes, dislikes, and perceived reliability of the tool; and interest in recommending the tool to a colleague.
RESULTS
On a 7-point Likert scale, pharmacists rated ease (mean 5.98, SD 1.38) and satisfaction (mean 6.31, SD 1.34) with the combined POCI tool higher than the physicians, nurse practitioner, and physician assistants did (ease: mean 5.57, SD 1.64; satisfaction: mean 5.82, SD 1.60). Pharmacists also spent longer on average finding an answer to their question (mean 2 minutes, 26 seconds, SD 1 minute, 41 seconds) than the physicians, nurse practitioner, and physician assistants (mean 1 minute, 40 seconds, SD 1 minute, 23 seconds).
CONCLUSIONS
Overall, the tool performed well, but this usability evaluation identified multiple opportunities for improvement that would particularly help inexperienced users.