BACKGROUND
There is increasing interest in using routinely collected electronic health data to support reflective practice and long-term professional learning. Studies have evaluated the impact of dashboards on clinician decision-making, task completion time, user satisfaction, and adherence to clinical guidelines.
OBJECTIVE
This scoping review summarizes the literature on dashboards based on patient administrative, medical, and surgical data intended to support reflective practice by clinicians.
METHODS
A scoping review was conducted using the Arksey and O’Malley framework. Five electronic databases (MEDLINE, EMBASE, Scopus, ACM Digital Library, and Web of Science) were searched to identify studies meeting the inclusion criteria. Study selection and characterization were performed by two independent reviewers. One reviewer extracted the data, which were analyzed descriptively to map the available evidence.
RESULTS
A total of 18 dashboards from eight countries were assessed. The dashboards were designed for performance improvement (n=10), to support quality and safety initiatives (n=6), and for management and operations (n=4). Data visualizations were primarily designed for team use (n=12) rather than for individual clinicians (n=4). Evaluation methods included eliciting feedback directly from clinicians (n=11), observing user behavior through clinical indicator and usage log data (n=14), and usability testing (n=4). Studies reported high scores on standard usability questionnaires and favorable survey and interview feedback. Improvements in underlying clinical indicators were observed in seven of nine studies, while two studies reported no significant change in performance.
CONCLUSIONS
This scoping review maps the current landscape of literature on dashboards based on routinely collected clinical indicator data. Although common data visualization techniques and clinical indicators were used across studies, the dashboards varied in both design and evaluation. Design processes were rarely documented in sufficient detail for reproducibility. We also identified a lack of interface features to support clinicians in making sense of and reflecting on their performance data for long-term professional learning.