Enhancing Medical Image Segmentation: Ground Truth Optimization through Evaluating Uncertainty in Expert Annotations
Published: 2023-09-02
Issue: 17
Volume: 11
Page: 3771
ISSN: 2227-7390
Container-title: Mathematics
Language: en
Author:
Athanasiou Georgios 1, Arcos Josep Lluis 1, Cerquides Jesus 1
Affiliation:
1. Artificial Intelligence Research Institute (IIIA), Spanish National Research Council (CSIC), Campus Autonomous University of Barcelona (UAB), 08193 Barcelona, Spain
Abstract
The recent surge of supervised learning methods for segmentation has underscored the critical role of label quality in prediction performance. This issue is especially prevalent in medical imaging, where high annotation costs and inter-observer variability pose significant challenges. Acquiring labels commonly involves multiple experts providing their interpretations of the “true” segmentation, each influenced by individual biases. Blindly accepting these noisy labels as the ground truth restricts the potential effectiveness of segmentation algorithms. Here, we apply coupled convolutional neural network approaches to a small real-world dataset of bovine cumulus oocyte complexes, structures that are crucial for healthy embryo development. This is the first time these methods have been applied to a medical dataset with real expert annotations, as they were previously tested only on artificially generated labels of medical and non-medical datasets. This application revealed an important challenge: the large areas of agreement between experts prevent the networks from effectively learning a distinct confusion matrix for each expert. In response, we propose a novel method that focuses on areas of high uncertainty. This approach allows us to better characterize the individual annotators, extract their behavior, and use this insight to construct a more refined ground truth via maximum likelihood. These findings contribute to the ongoing discussion on leveraging machine learning algorithms for medical image segmentation, particularly in scenarios involving multiple human annotators.
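The abstract describes restricting the per-expert analysis to high-uncertainty (disagreement) regions and then fusing the expert masks into a maximum-likelihood ground truth. The following minimal NumPy sketch illustrates that general idea under stated assumptions; it is not the authors' pipeline. In particular, the function names, the binary-mask assumption, and the use of a simple majority-vote reference (in place of the paper's coupled CNNs) are all illustrative.

```python
import numpy as np

def disagreement_mask(annotations):
    """Boolean map of pixels where the experts' binary masks do not all agree."""
    stack = np.stack(annotations, axis=0)
    return stack.min(axis=0) != stack.max(axis=0)

def confusion_matrix(annotation, reference, region):
    """2x2 matrix P(annotator label | reference label), estimated only inside `region`."""
    cm = np.zeros((2, 2))
    for t in (0, 1):        # label of the provisional reference
        for a in (0, 1):    # label given by the annotator
            cm[t, a] = np.sum((reference[region] == t) & (annotation[region] == a))
    return cm / np.clip(cm.sum(axis=1, keepdims=True), 1, None)  # row-normalise safely

def ml_ground_truth(annotations, confusions, prior=0.5):
    """Per-pixel maximum-likelihood label, treating annotators as conditionally independent."""
    log_post = np.zeros((2,) + annotations[0].shape)
    log_post[0] += np.log(1.0 - prior)
    log_post[1] += np.log(prior)
    for ann, cm in zip(annotations, confusions):
        for t in (0, 1):
            # log P(annotator's observed label | true label == t), per pixel
            log_post[t] += np.log(np.clip(cm[t, ann], 1e-12, None))
    return np.argmax(log_post, axis=0).astype(np.uint8)

# Hypothetical usage with three synthetic expert masks of the same image (0/1 arrays).
rng = np.random.default_rng(0)
base = (rng.random((64, 64)) > 0.5).astype(np.uint8)
masks = [np.where(rng.random((64, 64)) < 0.1, 1 - base, base) for _ in range(3)]

region = disagreement_mask(masks)                              # high-uncertainty pixels
reference = (np.mean(masks, axis=0) >= 0.5).astype(np.uint8)   # provisional majority vote
cms = [confusion_matrix(m, reference, region) for m in masks]  # one matrix per expert
fused = ml_ground_truth(masks, cms)                            # refined ground truth
```

The sketch only shows how the pieces fit together: estimating each expert's confusion matrix on disagreement areas (where individual behavior is actually observable) and combining the masks per pixel by maximum likelihood. In the paper, the per-expert characterizations come from coupled convolutional neural networks rather than a fixed majority-vote reference.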
Funder
Marie Skłodowska-Curie; Spanish Ministry of Science and Innovation
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)