Author:
Chee Marcel Lucas, Chee Mark Leonard, Huang Haotian, Mazzochi Katie, Taylor Kieran, Wang Han, Feng Mengling, Ho Andrew Fu Wah, Siddiqui Fahad Javaid, Ong Marcus Eng Hock, Liu Nan
Abstract
Introduction
The literature on the use of AI in prehospital emergency care (PEC) settings is scattered and diverse, making it difficult to understand the current state of the field. In this scoping review, we aim to provide a descriptive analysis of the current literature and to visualise and identify knowledge and methodological gaps using an evidence map.
Methods
We conducted a scoping review from inception until 14 December 2021 on MEDLINE, Embase, Scopus, IEEE Xplore, ACM Digital Library, and the Cochrane Central Register of Controlled Trials (CENTRAL). We included peer-reviewed, original studies that applied AI to prehospital data, including applications for cardiopulmonary resuscitation (CPR), automated external defibrillation (AED), out-of-hospital cardiac arrest, and emergency medical service (EMS) infrastructure such as stations and ambulances.
Results
The search yielded 4350 articles, of which 106 met the inclusion criteria. Most studies were retrospective (n=88, 83·0%), with only one (0·9%) randomised controlled trial. Studies were mostly internally validated (n=96, 90·6%), and only ten studies (9·4%) reported calibration metrics. While the most studied AI applications were triage/prognostication (n=52, 49·1%) and CPR/AED optimisation (n=26, 24·5%), a few studies reported unique use cases of AI, such as patient–trial matching for research and Internet-of-Things (IoT) wearables for continuous monitoring. Of the 49 studies that identified a comparator, 39 reported AI performance superior to either clinicians or non-AI status quo algorithms. A minority of studies utilised multimodal inputs (n=37, 34·9%), with few models using text (n=8), audio (n=5), images (n=1), or videos (n=0) as inputs.
Conclusion
AI in PEC is a growing field, and several promising use cases have been reported, including prognostication, demand prediction, resource optimisation, and IoT continuous monitoring systems. Prospective, externally validated studies are needed before applications can progress beyond the proof-of-concept stage to real-world clinical settings.
Funding
This work was supported by the Duke-NUS Signature Research Programme funded by the Ministry of Health, Singapore.
Research in context
Evidence before the study
There has been growing research into artificial intelligence as a potential decision support tool in prehospital emergency care (PEC) settings. Previous reviews summarising AI research in emergency and critical care settings exist, some of which include prehospital care studies peripherally. However, the landscape of AI research in PEC has not been well characterised by any previous review. In this scoping review, we searched six databases up to 14 December 2021 for eligible studies and summarise the evidence from 106 studies investigating AI applications in PEC settings.
Added value of the study
To our knowledge, our scoping review is the first to present a comprehensive analysis of the landscape of AI applications in PEC. It contributes to the field by highlighting the most studied AI applications and identifying the most common methodological approaches across the 106 included studies. Our study examines the level of validation and the comparative performance of AI applications against clinicians or non-AI algorithms, which offers insight into the current efficacy of AI in PEC. We provide a unique contribution by visualising knowledge and methodological gaps in the field using an evidence map. This scoping review is a valuable resource for researchers and clinicians interested in the potential of AI in PEC and serves as a roadmap for future research.
Implications of all the available evidence
Our findings reveal a promising future for AI in PEC, with many unique use cases and applications already showing good performance in internally validated studies. However, there is a need for more rigorous, prospective validation of AI applications before they can be implemented in clinical settings. This underscores the importance of explainable AI, which can improve clinicians' trust in AI systems and encourage the validation of AI models in real-world settings.
Publisher
Cold Spring Harbor Laboratory