Abstract
Purpose
Pupillary instability is a known risk factor for complications in cataract surgery. This study aimed to develop and validate a reliable computational framework for the automated assessment of pupil morphologic changes during the various phases of cataract surgery.

Design
Retrospective surgical video analysis.

Subjects
Two hundred forty complete surgical video recordings, of which 190 surgeries were performed without a pupil expansion device and 50 with a pupil expansion device.

Methods
The proposed framework consists of three stages: feature extraction, deep learning (DL)-based anatomy recognition, and obstruction detection/compensation. In the first stage, surgical video frames undergo noise reduction using a tensor-based wavelet feature extraction method. In the second stage, DL-based segmentation models are trained and employed to segment the pupil, limbus, and palpebral fissure. In the third stage, obstructed visualization of the pupil is detected and compensated for using a DL-based algorithm. A dataset of 5,700 intraoperative video frames across 190 cataract surgeries in the BigCat database was collected to validate algorithm performance.

Main Outcome Measures
The pupil analysis framework was assessed on the basis of segmentation performance for both obstructed and unobstructed pupils. Classification performance of models using the segmented pupil time series to predict surgeon use of a pupil expansion device was also assessed.

Results
An architecture based on the FPN model with a VGG16 backbone, integrated with the AWTFE feature extraction method, demonstrated the highest anatomy segmentation performance, with a Dice coefficient of 96.52%. Incorporating the obstruction compensation algorithm further improved performance (Dice coefficient, 96.82%).
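The Dice coefficient reported above measures overlap between a predicted segmentation mask and a human-annotated ground-truth mask. A minimal sketch of the metric (the function name and the set-of-pixel-indices mask representation are illustrative choices, not details from the study):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as collections of pixel indices."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both masks empty: treat as perfect agreement
    overlap = len(pred & truth)
    return 2.0 * overlap / (len(pred) + len(truth))

# Toy example: the predicted pupil mask matches 3 of 4 ground-truth pixels
# (and contains 1 spurious pixel), giving Dice = 2*3 / (4+4) = 0.75.
pred_mask = [(0, 0), (0, 1), (1, 0), (2, 2)]
true_mask = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(round(dice_coefficient(pred_mask, true_mask), 2))  # → 0.75
```

A Dice coefficient of 96.52% thus indicates near-complete agreement between the model's pupil masks and the annotated ground truth.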
Downstream analysis of the framework output enabled the development of a support vector machine (SVM)-based classifier that predicted surgeon use of a pupil expansion device before its placement with 96.67% accuracy and an area under the curve (AUC) of 99.44%.

Conclusions
The experimental results demonstrate that the proposed framework (1) provides high accuracy in pupil analysis relative to human-annotated ground truth, (2) substantially outperforms isolated use of a DL segmentation model, and (3) can enable downstream analytics with clinically valuable predictive capacity.
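The downstream classifier operates on summary features derived from the segmented pupil time series. The sketch below shows the general idea; the specific features (mean area, area range, and mean frame-to-frame change) are assumptions for illustration, not the study's published feature set:

```python
from statistics import mean

def pupil_features(areas):
    """Summarize a pupil-area time series (one value per video frame)
    into scalar features an SVM-style classifier could consume.
    NOTE: these features are hypothetical, chosen only to illustrate
    how a segmentation time series becomes a classifier input."""
    diffs = [abs(b - a) for a, b in zip(areas, areas[1:])]
    return {
        "mean_area": mean(areas),                          # overall pupil size
        "area_range": max(areas) - min(areas),             # total constriction/dilation
        "mean_abs_change": mean(diffs) if diffs else 0.0,  # frame-to-frame instability
    }

# Toy series: a pupil that constricts over the clip shows a large range
# and high frame-to-frame change, the kind of pattern a classifier might
# associate with instability and eventual expansion-device use.
series = [40.0, 38.0, 30.0, 22.0, 20.0]
feats = pupil_features(series)
print(feats["area_range"])  # → 20.0
```

In the study, feature vectors like these (computed per surgery from the framework's pupil segmentations) would be fed to the SVM to predict device usage before placement.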
Publisher
Cold Spring Harbor Laboratory