Crowd-sourced and expert video assessment in minimally invasive esophagectomy

Authors

Mirte H. M. Ketel, Bastiaan R. Klarenbeek, Yassin Eddahchouri, Miguel A. Cuesta, Elke van Daele, Christian A. Gutschow, Arnulf H. Hölscher, Michal Hubka, Misha D. P. Luyer, Robert E. Merritt, Grard A. P. Nieuwenhuijzen, Yaxing Shen, Inger L. Abma, Camiel Rosman, Frans van Workum

Abstract

Background

Video-based assessment by experts may structurally measure surgical performance using procedure-specific competency assessment tools (CATs). A CAT for minimally invasive esophagectomy (MIE-CAT) was developed and validated previously. However, surgeons' time is scarce, and video assessment is time-consuming and labor-intensive. This study investigated whether non-procedure-specific assessment of MIE video clips by MIE experts and by crowdsourcing (collective surgical performance evaluation by anonymous, untrained laypeople) can assist procedure-specific expert review.

Methods

Two surgical performance scoring frameworks were used to assess eight MIE videos. First, global performance was assessed with the non-procedure-specific Global Operative Assessment of Laparoscopic Skills (GOALS) on 64 procedural-phase-based video clips of < 10 min each. Each clip was assessed by two MIE experts and by > 30 crowd workers. Second, the same experts assessed procedure-specific performance of the corresponding full-length videos with the MIE-CAT. Reliability and convergent validity of GOALS for MIE were investigated using hypothesis testing with correlations (experience of the surgical team, blood loss, operative time, and MIE-CAT scores).

Results

Fewer than 75% of the hypothesized correlations between GOALS scores and experience of the surgical team (r < 0.3), blood loss (r = −0.82 to 0.02), operative time (r = −0.42 to 0.07), and MIE-CAT scores (r = −0.04 to 0.76) were met, for both crowd workers and experts. Interestingly, experts' GOALS and MIE-CAT scores correlated strongly (r = 0.40 to 0.79), whereas correlations between crowd workers' GOALS scores and experts' MIE-CAT scores were weak (r = −0.04 to 0.49). Expert and crowd-worker GOALS scores correlated poorly (ICC ≤ 0.42).

Conclusion

GOALS assessments by crowd workers lacked convergent validity and showed poor reliability; MIE is likely too technically difficult for laypeople to assess. Convergent validity of GOALS assessments by experts could not be established either: GOALS may not be comprehensive enough to assess MIE performance in detail. However, experts' GOALS and MIE-CAT scores correlated strongly, indicating that video-clip assessments (instead of full-length video assessments) could be useful to shorten assessment time.
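To illustrate the kind of analyses summarized above, the sketch below shows how a convergent-validity correlation (GOALS vs. MIE-CAT) and an inter-rater intraclass correlation coefficient (ICC) between expert and crowd-worker GOALS scores could be computed in Python. This is a minimal sketch, not the authors' analysis code; the data, column names, and data layout are assumptions made for illustration only.

```python
# Minimal sketch (not the study's analysis code): a convergent-validity
# correlation between GOALS and MIE-CAT scores, and an inter-rater ICC
# between expert and crowd-worker GOALS scores. All numbers are made up.
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

# Per-clip mean GOALS scores from two rater groups, plus the experts'
# MIE-CAT score for the corresponding phase (hypothetical values).
scores = pd.DataFrame({
    "clip_id":       [1, 2, 3, 4, 5, 6, 7, 8] * 2,
    "rater_group":   ["expert"] * 8 + ["crowd"] * 8,
    "goals_total":   [18, 21, 16, 23, 19, 22, 17, 20,
                      20, 19, 21, 22, 18, 21, 20, 19],
    "mie_cat_total": [30, 35, 27, 38, 31, 36, 28, 33] * 2,
})

# Convergent validity: correlate experts' GOALS scores with their MIE-CAT scores.
experts = scores[scores["rater_group"] == "expert"]
r, p = pearsonr(experts["goals_total"], experts["mie_cat_total"])
print(f"Experts' GOALS vs MIE-CAT: r = {r:.2f} (p = {p:.3f})")

# Inter-rater reliability: two-way ICC with clips as targets and the two
# rater groups as raters.
icc = pg.intraclass_corr(data=scores, targets="clip_id",
                         raters="rater_group", ratings="goals_total")
print(icc[icc["Type"] == "ICC2"][["Type", "ICC", "CI95%"]])
```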

Funder

Ethicon Endo-Surgery

Publisher

Springer Science and Business Media LLC

Subject

Surgery
