A 2.5D Self-Training Strategy for Carotid Artery Segmentation in T1-Weighted Brain Magnetic Resonance Images
Published: 2024-07-03
Journal: Journal of Imaging (J. Imaging), Volume 10, Issue 7, Page 161
ISSN: 2313-433X
Language: en
Authors:
Adriel Silva de Araújo (1), Márcio Sarroglia Pinho (1), Ana Maria Marques da Silva (2), Luis Felipe Fiorentini (3,4), Jefferson Becker (5,6)
Affiliations:
1. School of Technology, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90619-900, Brazil
2. Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo, São Paulo 05403-010, Brazil
3. Centro de Diagnóstico por Imagem, Santa Casa de Misericórdia de Porto Alegre, Porto Alegre 90020-090, Brazil
4. Grupo Hospitalar Conceição, Porto Alegre 91350-200, Brazil
5. Hospital São Lucas, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90610-000, Brazil
6. Brain Institute, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90619-900, Brazil
Abstract
Precise annotations for large medical image datasets are time-consuming to produce. Additionally, volumetric regions of interest are typically segmented slice by slice in 2D, discarding inter-slice information that is important for accurately segmenting 3D structures. This study presents a deep learning pipeline that tackles both challenges. First, to streamline the annotation process, we employ a semi-automatic segmentation approach using bounding boxes as masks, which is less time-consuming than pixel-level delineation. Next, recursive self-training is used to progressively refine annotation quality. Finally, a 2.5D segmentation technique is adopted, in which each slice of a volumetric image is segmented from a pseudo-RGB image. The pipeline was applied to segment the carotid artery tree in T1-weighted brain magnetic resonance images. Using 42 volumetric non-contrast T1-weighted brain scans from four datasets, we delineated bounding boxes around the carotid arteries in the axial slices. Pseudo-RGB images were generated from these slices, and recursive segmentation was conducted using a Res-Unet-based neural network architecture. The model's performance was tested on a separate dataset, with ground truth annotations provided by a radiologist. After recursive training, we achieved an Intersection over Union (IoU) score of 0.68 ± 0.08 on the unseen dataset, together with good qualitative results.
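The two quantitative ingredients of the pipeline can be sketched in a few lines. The abstract does not specify how the pseudo-RGB images are built; the sketch below assumes the common 2.5D recipe of stacking each axial slice with its two neighbors as the three channels, which is an assumption, not the paper's confirmed method. The IoU metric matches the score reported above.

```python
import numpy as np

def pseudo_rgb_slices(volume):
    """Build a pseudo-RGB image per axial slice of a (depth, H, W) volume.

    ASSUMPTION: channels are the previous, current, and next slice
    (edge slices are padded by repeating the boundary slice). The paper
    only states that a pseudo-RGB image is generated per slice.
    """
    padded = np.concatenate([volume[:1], volume, volume[-1:]], axis=0)
    # Result shape: (depth, H, W, 3) -> channels are slices i-1, i, i+1.
    return np.stack([padded[:-2], padded[1:-1], padded[2:]], axis=-1)

def iou(pred, target):
    """Intersection over Union between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0
```

In the recursive self-training step, the model trained on the coarse bounding-box masks predicts new masks, which replace the previous labels for the next training round; `iou` is then evaluated against the radiologist's ground truth on the held-out dataset.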
Funders:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES); Novartis