Author:
VanVoorst Brian R., Walczak Nicholas R., Hackett Matthew G., Norfleet Jack E., Schewe Jon P., Fasching Joshua S.
Abstract
Introduction
Within any training event, debriefing is a vital component that highlights areas of proficiency and deficiency, enables reflection, and ultimately provides opportunity for remediation. Video-based debriefing is desirable for capturing performance and replaying events, but in practice it is rife with challenges, principally lengthy video and occlusions that block the cameras' line of sight to participants.
Methods
To address these challenges, researchers automated the editing of a video debrief using a system of person-worn cameras and computer vision techniques. The cameras record a simulation event, and the video is processed using computer vision. Researchers investigated a variety of computer vision techniques, ultimately focusing on the scale-invariant feature transform (SIFT) detection method and a convolutional neural network. The system was trained to detect and tag medically relevant segments of video and to assess a single exemplar medical intervention, in this case the application of a tourniquet.
Results
The system tagged medically relevant video segments with 92% recall and 66% precision, resulting in an F1 (harmonic mean of precision and recall) of 72% (N = 23). The exemplar medical intervention was successfully assessed in 39.5% of videos (N = 39).
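For context on the metrics above, the F1 score is the harmonic mean of precision and recall. A minimal sketch of the computation in Python follows; note that an F1 averaged per video can differ from one computed directly from pooled precision and recall, so a reported dataset-level F1 need not match a direct plug-in of the quoted percentages:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; defined as 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative round-number example (not the study's data):
# perfect precision and recall yield a perfect F1.
print(f1_score(1.0, 1.0))  # 1.0
```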
Conclusion
The system showed suitable accuracy tagging medically relevant video segments, but requires additional research to improve medical intervention assessment accuracy. Computer vision has the potential to automate video debrief creation to augment existing debriefing strategies.
Publisher
Ovid Technologies (Wolters Kluwer Health)
Subject
Modeling and Simulation, Education, Medicine (miscellaneous), Epidemiology
Cited by
1 article.