Abstract
Diabetic retinopathy (DR) is a chronic condition that can lead to significant vision loss and even blindness. Existing deep networks for hard exudate segmentation in fundus images face two primary challenges: (1) the receptive field of traditional convolution operations is limited, resulting in poor hard exudate extraction; (2) because fine exudates are irregularly distributed and vary in size, information about tiny exudates is easily lost during feature extraction. To address these challenges, we propose DBASNet, a novel lesion segmentation model. To overcome the insufficient segmentation caused by the limited receptive field, we propose a new multi-scale attention feature extraction (MAT) module. Combined with a dual-encoder structure, the features extracted by MAT and EfficientNet in the two branches are fused to effectively enlarge the receptive field and avoid information loss. We also propose an attentional skip connection (AS) module in the decoder to filter and retain channel and spatial information, enriching the skip connections and carrying feature information of tiny lesions. Experiments on the publicly available IDRiD and E-Ophtha-EX datasets demonstrate the effectiveness of our method. DBASNet achieves recall, precision, Dice, and IoU of 79.48%, 80.35%, 79.81%, and 66.64% on IDRiD, and 52.73%, 60.33%, 56.16%, and 39.82% on E-Ophtha-EX, respectively, outperforming several state-of-the-art approaches. Both quantitative and qualitative results demonstrate the superiority of DBASNet for lesion segmentation in diabetic retinopathy.