Comparing Basic Life Support Serious Gaming Scores With Hands-on Training Platform Performance Scores: Pilot Simulation Study for Basic Life Support Training

Author:

Aksoy, Mehmet Emin

Abstract

Background: Serious games enrich simulation-based health care training and improve learners' knowledge, skills, and self-confidence while entertaining them.

Objective: A platform that combines performance data from a basic life support (BLS) serious game app with hands-on performance data based on the same scoring system is not available on the market. The aim of this study was to create such a platform and to investigate whether the evaluation of BLS training performance would be more objective than conventional Objective Structured Clinical Examination (OSCE) assessment if it were carried out with a platform that combines OSCE scoring criteria with data retrieved from the simulator's sensors.

Methods: Participants were 25 volunteers (11 men [44.0%] and 14 women [56.0%]) among Acıbadem Mehmet Ali Aydınlar University students without prior knowledge of the BLS protocol. A serious game module was created to teach learners the European Resuscitation Council Basic Life Support 2015 protocol. A second module, called the hands-on module, was designed for educators. This module includes the checklist used for BLS OSCE examinations and retrieves sensor data such as compression depth, compression frequency, and ventilation volume from the manikin (CPR Lilly; 3B Scientific GmbH) via Bluetooth. The data retrieved from the manikin's sensors enable educators to evaluate learners more objectively. Performance data retrieved from the serious game module were combined with the results of the hands-on module. Data acquired from the hands-on module were also compared with the participants' conventional OSCE scores, which were obtained by watching videos of the same training sessions.

Results: Participants were considered successful in the game if they scored 80/100 or above, which they achieved in an average of 1.4 (SD 0.65) trials. The mean BLS serious game score was 88.3/100 (SD 5.17), the mean hands-on training app score was 70.7/100 (SD 17.3), and the mean OSCE score was 84.4/100 (SD 12.9). There was no statistically significant correlation between the number of trials needed to succeed (score ≥80/100) and the serious game, hands-on training app, and OSCE scores (Spearman rho test, P>.05).

Conclusions: Although the scoring criteria for the OSCE and the hands-on training app were identical, OSCE scores were 17% higher than hands-on training app scores. Analysis of the score differences between the hands-on training app and the OSCE revealed that they originated from scoring parameters such as compression depth, compression frequency, and ventilation volume. These data suggest that the evaluation of BLS training would be more objective if it were carried out with a modality that combines visual OSCE scoring criteria with data retrieved from the simulator's sensors.

Trial Registration: ClinicalTrials.gov NCT04533893; https://clinicaltrials.gov/ct2/show/NCT04533893
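
As a rough illustration only (this is not the study's software; the field names, point weighting, and target ranges below are assumptions based on ERC 2015 BLS guidance of 5-6 cm compression depth, 100-120 compressions/min, and roughly 500-600 mL ventilation volume), combining visually scored checklist items with sensor-verified CPR quality into a single hands-on score could look like this minimal Python sketch:

# Illustrative sketch only: hypothetical names, weights, and thresholds,
# not the study's actual scoring software.
from dataclasses import dataclass

@dataclass
class SensorReadings:
    compression_depth_cm: float      # mean compression depth reported by the manikin
    compression_rate_per_min: float  # mean compression rate
    ventilation_volume_ml: float     # mean ventilation volume

def in_range(value: float, low: float, high: float) -> bool:
    return low <= value <= high

def hands_on_score(checklist_points: int, readings: SensorReadings) -> int:
    """Combine visually scored OSCE checklist points (assumed 0-70 here) with
    sensor-verified CPR quality (assumed 0-30 here) into a 0-100 score."""
    sensor_points = 0
    sensor_points += 10 if in_range(readings.compression_depth_cm, 5.0, 6.0) else 0
    sensor_points += 10 if in_range(readings.compression_rate_per_min, 100, 120) else 0
    sensor_points += 10 if in_range(readings.ventilation_volume_ml, 500, 600) else 0
    return checklist_points + sensor_points

if __name__ == "__main__":
    readings = SensorReadings(compression_depth_cm=4.6,
                              compression_rate_per_min=118,
                              ventilation_volume_ml=540)
    # Depth is below target, so only rate and ventilation earn sensor points: prints 85.
    print(hands_on_score(checklist_points=65, readings=readings))

The point of such a scheme is that parameters a rater cannot judge reliably by eye (depth, rate, volume) are scored from sensor data rather than observation, which is the kind of discrepancy the study reports between OSCE and hands-on app scores.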

Publisher

JMIR Publications Inc.

Subject

Psychiatry and Mental Health; Computer Science Applications; Rehabilitation; Biomedical Engineering; Physical Therapy, Sports Therapy and Rehabilitation
