Author:
Celentano Valerio, Smart Neil, Cahill Ronan A., Spinelli Antonino, Giglio Mariano Cesare, McGrath John, Obermair Andreas, Pellino Gianluca, Hasegawa Hirotoshi, Lal Pawanindra, Lorenzon Laura, De Angelis Nicola, Boni Luigi, Gupta Sharmila, Griffith John P., Acheson Austin G., Cecil Tom D., Coleman Mark G.
Abstract
Introduction
There has been a steady increase in the number of published surgical videos, with a preference for open-access sources, but the proportion of videos undergoing peer review prior to publication has markedly decreased, raising questions over the quality of the educational content presented. The aim of this study was the development and validation of a standard framework for the appraisal of surgical videos submitted for presentation and publication: the LAParoscopic surgery Video Educational GuidelineS (LAP-VEGaS) video assessment tool.
Methods
An international committee identified items for inclusion in the LAP-VEGaS video assessment tool and finalised the marking score using Delphi methodology. The tool was then validated by anonymous evaluation of selected videos by a group of validators not involved in the tool's development.
Results
Nine items were included in the LAP-VEGaS video assessment tool, with each item scored from 0 (item not presented in the video) to 2 (item extensively presented in the video), giving a total marking score ranging from 0 to 18. The LAP-VEGaS video assessment tool proved highly accurate in identifying and selecting videos for acceptance for conference presentation and publication, with a high level of internal consistency and generalisability.
Conclusions
We propose that peer review in adherence to the LAP-VEGaS video assessment tool could enhance the overall quality of published video outputs.
Publisher
Springer Science and Business Media LLC
Cited by
44 articles.