Evaluation of Anesthesia Residents Using Mannequin-based Simulation

Authors:

Schwid, Howard A. (1); Rooke, G. Alec (1); Carline, Jan (2); Steadman, Randolph H. (3); Murray, W. Bosseau (4); Olympio, Michael (5); Tarver, Stephen (6); Steckner, Karen (7); Wetstone, Susan (8)

Affiliation:

1. Professor of Anesthesiology, Department of Anesthesiology, University of Washington, and Staff Anesthesiologist, VA Puget Sound Health Care System.

2. Professor of Medical Education, Department of Medical Education, University of Washington.

3. Associate Clinical Professor of Anesthesiology and Vice Chair, Department of Anesthesiology, University of California-Los Angeles, Los Angeles, California.

4. Professor of Anesthesiology, Department of Anesthesiology, Pennsylvania State University, Hershey, Pennsylvania.

5. Associate Professor of Anesthesiology, Department of Anesthesiology, Wake Forest University, Winston-Salem, North Carolina.

6. Associate Professor of Anesthesiology, Department of Anesthesiology, University of Kansas, Kansas City, Kansas.

7. Staff Anesthesiologist, Department of General Anesthesia, Cleveland Clinic, Cleveland, Ohio.

8. Medical Student, University of Washington School of Medicine.

Members of the Anesthesia Simulator Research Consortium are listed in Appendix A.

Abstract

Background: Anesthesia simulators can generate reproducible, standardized clinical scenarios for instruction and evaluation. To use simulation for the evaluation of anesthesia residents, valid and reliable simulated scenarios and grading systems must be developed.

Methods: After Human Subjects approval was obtained at each of the 10 participating institutions, 99 anesthesia residents consented to be videotaped while managing four simulated scenarios on MedSim or METI mannequin-based anesthesia simulators. Using two different grading forms, two evaluators at each department independently reviewed the videotapes of the subjects from their institution and scored the residents' performance. A third evaluator, at an outside institution, reviewed each videotape again. Statistical analyses assessed construct- and criterion-related validity, internal consistency, interrater reliability, and intersimulator reliability. A single evaluator reviewed all videotapes a fourth time to determine the frequency of specific management errors.

Results: Even advanced anesthesia residents nearing completion of their training made numerous management errors; however, construct-related validity of mannequin-based simulator assessment was supported by an overall improvement in simulator scores from the CB and CA-1 to the CA-2 and CA-3 levels of training. Subjects rated the simulator scenarios as realistic (3.47 of a possible 4), further supporting construct-related validity. Criterion-related validity was supported by moderate correlations of simulator scores with departmental faculty evaluations (0.37-0.41, P < 0.01), ABA written in-training scores (0.44-0.49, P < 0.01), and departmental mock oral board scores (0.44-0.47, P < 0.01). Reliability of the simulator assessment was demonstrated by very good internal consistency (alpha = 0.71-0.76) and excellent interrater reliability (correlation = 0.94-0.96, P < 0.01; kappa = 0.81-0.90). There was no significant difference between METI and MedSim scores for residents in the same year of training.

Conclusions: Numerous management errors were identified in this study of anesthesia residents from 10 institutions, and because advanced residents continued to make these errors, further attention to these problems may benefit residency training. Evaluation of anesthesia residents using mannequin-based simulators shows promise and adds a new dimension to current assessment methods, but further improvements to the simulation scenarios and grading criteria are necessary before mannequin-based simulation is used for accreditation purposes.

Publisher

Ovid Technologies (Wolters Kluwer Health)

Subject

Anesthesiology and Pain Medicine

References: 17 articles.

Cited by 160 articles.
