Abstract
Artificial intelligence has demonstrated a clear capability to automatically grade diabetic retinopathy (DR) on mydriatic retinal images captured by clinical experts with fixed, table-top retinal cameras in hospital settings. In many low- and middle-income countries, however, DR screening relies on minimally trained field workers using handheld non-mydriatic cameras in community settings. This prospective study evaluated the diagnostic accuracy of a deep learning algorithm developed by the Singapore Eye Research Institute using mydriatic retinal images, commercially available as Zeiss VISUHEALTH-AI DR, when applied to images captured by field workers with a Zeiss Visuscout® 100 non-mydriatic handheld camera from people with diabetes in a house-to-house cross-sectional study across 20 regions in India. A total of 20,489 eyes from 11,199 patients were used to evaluate the algorithm's performance in identifying referable DR, non-referable DR, and image gradability. For these three categories, the algorithm achieved precision values of 29.60% (95% CI 27.40, 31.88), 92.56% (92.13, 92.97), and 58.58% (56.97, 60.19); recall values of 62.69% (59.17, 66.12), 85.65% (85.11, 86.18), and 65.06% (63.40, 66.69); and F-scores of 40.22% (38.25, 42.21), 88.97% (88.62, 89.31), and 61.65% (60.50, 62.80), respectively. The model reached 91.22% (90.79, 91.64) sensitivity and 65.06% (63.40, 66.69) specificity in detecting gradability, and 72.08% (70.68, 73.46) sensitivity and 85.65% (85.11, 86.18) specificity in detecting all referable eyes. Algorithm accuracy depends on the quality of the acquired retinal images, and image quality is a major limiting step for global implementation of community non-mydriatic DR screening with handheld cameras. This study highlights the need to develop and train deep learning-based screening tools under such conditions before implementation.
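The abstract reports per-category precision, recall, and F-score with 95% confidence intervals. As a rough illustration only (the study's evaluation code is not described here), the sketch below shows one common way such metrics and percentile-bootstrap intervals could be computed from per-eye labels; the array names, class labels, and the choice of a bootstrap over eyes (which ignores the correlation between the two eyes of one patient) are assumptions, not the authors' method, and the study may have used a different interval estimator.

    # Minimal sketch (not the study's actual evaluation code) of per-category
    # precision, recall, and F-score with percentile-bootstrap 95% CIs computed
    # from per-eye ground-truth labels and algorithm outputs.
    # Array names (y_true, y_pred) and class labels are illustrative assumptions.
    import numpy as np

    def precision_recall_f1(y_true, y_pred, positive):
        """Treat `positive` as the class of interest; all other labels are negative."""
        tp = np.sum((y_pred == positive) & (y_true == positive))
        fp = np.sum((y_pred == positive) & (y_true != positive))
        fn = np.sum((y_pred != positive) & (y_true == positive))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        return precision, recall, f1

    def bootstrap_ci(y_true, y_pred, positive, metric_index, n_boot=2000, seed=0):
        """Percentile bootstrap over eyes for one metric (0=precision, 1=recall, 2=F-score)."""
        rng = np.random.default_rng(seed)
        n = len(y_true)
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)  # resample eyes with replacement
            stats.append(precision_recall_f1(y_true[idx], y_pred[idx], positive)[metric_index])
        return np.percentile(stats, [2.5, 97.5])

    # Illustrative usage with hypothetical per-eye labels:
    # "referable", "non_referable", or "ungradable".
    y_true = np.array(["referable", "non_referable", "ungradable", "non_referable"])
    y_pred = np.array(["referable", "non_referable", "non_referable", "non_referable"])

    for cls in ["referable", "non_referable", "ungradable"]:
        p, r, f1 = precision_recall_f1(y_true, y_pred, cls)
        print(f"{cls}: precision={p:.4f} recall={r:.4f} F-score={f1:.4f}")

In a real evaluation, resampling would more plausibly be done at the patient level rather than the eye level, since both eyes of one patient are not independent observations.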
Funder
UK Research and Innovation