Abstract
Diabetic retinopathy (DR) is a visual impairment caused by long-standing diabetes mellitus, which damages the retinal blood vessels. It is considered one of the principal causes of blindness and accounts for more than 158 million cases worldwide. Since early detection and classification can reduce visual impairment, it is important to develop an automated DR diagnosis method. Although deep learning models provide automatic feature extraction and classification, training such models from scratch requires a large annotated dataset, and the scarcity of annotated training data is a core obstacle to applying deep learning to medical image classification. Transfer-learning-based models are therefore widely adopted to overcome annotated-data insufficiency and computational overhead. In the proposed study, features are extracted from fundus images using the pre-trained network VGGNet and combined with transfer learning to improve classification performance. To address data insufficiency and class imbalance, we apply different data augmentation operations to each DR grade. Experimental results on a benchmark dataset indicate that the proposed framework outperforms state-of-the-art methods in terms of accuracy. Our technique, in combination with handcrafted features, could be used to further improve classification accuracy.
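As a rough illustration of the transfer-learning setup described above, the sketch below uses PyTorch and a pre-trained VGG16 from torchvision as a frozen feature extractor with a new classification head for the DR grades. The library choice, the five-grade output, and the specific augmentation operations are assumptions for illustration, not the paper's exact configuration.

import torch.nn as nn
from torchvision import models, transforms

# Load VGG16 pre-trained on ImageNet and freeze the convolutional base,
# so it acts as a fixed feature extractor (transfer learning).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in vgg.features.parameters():
    param.requires_grad = False

# Replace the original 1000-class head with a classifier for the DR grades.
# Assumption: five grades (0 = no DR ... 4 = proliferative DR).
vgg.classifier[6] = nn.Linear(4096, 5)

# Example augmentation pipeline for fundus images; in the described approach,
# augmentation is applied differently to each DR grade to counter class
# imbalance (e.g. heavier augmentation for under-represented grades).
augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])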
Funder
Beijing Natural Science Foundation
Cited by
39 articles.