Abstract
Medical image segmentation is essential to image-based disease analysis and has proven significantly helpful for clinical decision-making. Because of the low contrast of some medical images, accurate segmentation has always been a challenging problem. Our experiments found that UNet trained with existing loss functions cannot capture subtle information in target contours or regions of low-contrast medical images, information that is crucial for subsequent disease diagnosis. We propose a robust loss that incorporates the differences in average radial derivative (ARD), length and region area to help the network achieve more accurate segmentation results. We evaluated the proposed loss function with UNet as the base segmentation network against five conventional loss functions on one private and four public medical image datasets. Experimental results show that UNet with the proposed loss function achieves the best segmentation performance, outperforming even outstanding deep learning models trained with their original loss functions. Furthermore, three representative datasets were chosen to validate the effectiveness of the proposed δARD loss function with seven different models. These experiments demonstrate the plug-and-play nature of the δARD loss and its robustness across multiple models and datasets.
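To make the idea concrete, the sketch below shows one plausible way to combine the three ingredients named in the abstract (ARD difference, contour length and region area) into a single differentiable loss. It is not the authors' exact formulation: the function and weight names (`delta_ard_style_loss`, `w_ard`, `w_len`, `w_area`), the soft-mask approximations of length and area, and the centroid-based reading of "average radial derivative" are all assumptions made for illustration.

```python
# Hypothetical sketch of a delta-ARD-style segmentation loss (PyTorch),
# assuming pred are logits and target is a binary mask, both (B, 1, H, W).
import torch
import torch.nn.functional as F

def soft_area(mask):
    """Approximate region area of a soft (0..1) mask, per sample."""
    return mask.sum(dim=(1, 2, 3))

def soft_length(mask):
    """Approximate contour length as the total gradient magnitude of the mask."""
    dy = mask[:, :, 1:, :] - mask[:, :, :-1, :]
    dx = mask[:, :, :, 1:] - mask[:, :, :, :-1]
    return dy.abs().sum(dim=(1, 2, 3)) + dx.abs().sum(dim=(1, 2, 3))

def average_radial_derivative(mask, eps=1e-6):
    """Average derivative of the mask along radial directions from its centroid.
    One plausible reading of 'average radial derivative (ARD)' -- an assumption."""
    b, _, h, w = mask.shape
    ys = torch.arange(h, device=mask.device, dtype=mask.dtype)
    xs = torch.arange(w, device=mask.device, dtype=mask.dtype)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    total = mask.sum(dim=(1, 2, 3)) + eps
    cy = (mask.squeeze(1) * yy).sum(dim=(1, 2)) / total   # centroid row
    cx = (mask.squeeze(1) * xx).sum(dim=(1, 2)) / total   # centroid column
    # Unit radial direction at every pixel, per sample.
    ry = yy[None] - cy[:, None, None]
    rx = xx[None] - cx[:, None, None]
    norm = torch.sqrt(ry ** 2 + rx ** 2) + eps
    ry, rx = ry / norm, rx / norm
    # Central-difference gradients of the mask.
    gy = F.pad(mask, (0, 0, 1, 1))[:, :, 2:, :] - F.pad(mask, (0, 0, 1, 1))[:, :, :-2, :]
    gx = F.pad(mask, (1, 1, 0, 0))[:, :, :, 2:] - F.pad(mask, (1, 1, 0, 0))[:, :, :, :-2]
    # Project the gradient onto the radial direction and average.
    radial = gy.squeeze(1) * ry + gx.squeeze(1) * rx
    return radial.mean(dim=(1, 2))

def delta_ard_style_loss(pred, target, w_ard=1.0, w_len=1.0, w_area=1.0):
    """Penalise differences in ARD, contour length and region area between the
    predicted probability map and the ground-truth mask (illustrative only)."""
    pred = torch.sigmoid(pred)  # logits -> probabilities
    l_ard = (average_radial_derivative(pred) - average_radial_derivative(target)).abs()
    l_len = (soft_length(pred) - soft_length(target)).abs()
    l_area = (soft_area(pred) - soft_area(target)).abs()
    return (w_ard * l_ard + w_len * l_len + w_area * l_area).mean()
```

In practice such a term would typically be added to a standard region-overlap loss (e.g. Dice or cross-entropy), which is consistent with the plug-and-play use described in the abstract; the relative weights here are placeholders.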
Funder
Natural Science Foundation of Liaoning Province