Age Encoded Adversarial Learning for Pediatric CT Segmentation
Published: 2024-03-27
Issue: 4
Volume: 11
Page: 319
ISSN: 2306-5354
Container-title: Bioengineering
Language: en
Short-container-title: Bioengineering
Author:
Gheshlaghi, Saba Heidari (1); Kan, Chi Nok Enoch (2); Schmidt, Taly Gilat (3); Ye, Dong Hye (4)
Affiliation:
1. Department of Computer Science, Marquette University, Milwaukee, WI 53233, USA
2. Department of Electrical and Computer Engineering, Marquette University, Milwaukee, WI 53233, USA
3. Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53233, USA
4. Department of Computer Science, Georgia State University, Atlanta, GA 30303, USA
Abstract
Organ segmentation from CT images is critical in the early diagnosis of disease, progress monitoring, pre-operative planning, radiation therapy planning, and CT dose estimation. However, limited data remain one of the main challenges in medical image segmentation tasks, and the challenge is particularly acute in pediatric CT segmentation because children's heightened sensitivity to radiation restricts how many scans can be acquired. To address this issue, we propose a novel segmentation framework with a built-in auxiliary classifier generative adversarial network (ACGAN) that conditions on age, simultaneously generating additional features during training. The proposed conditional feature generation segmentation network (CFG-SegNet) was trained with a single loss function and used 2.5D segmentation batches. Our experiment was performed on a dataset of 359 subjects (180 male, 179 female) aged 5 days to 16 years, with a mean age of 7 years. With four-fold cross-validation, CFG-SegNet achieved average dice similarity coefficients (DSC) of 0.681 on the prostate, 0.619 on the uterus, 0.912 on the liver, and 0.832 on the heart. Compared with previously published U-Net results, our network improved segmentation accuracy by 2.7%, 2.6%, 2.8%, and 3.4% for the prostate, uterus, liver, and heart, respectively. These results indicate that our framework can segment organs more precisely when only limited training images are available.
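For context, the dice similarity coefficient (DSC) reported above measures the overlap between a predicted mask P and a ground-truth mask G as DSC = 2|P ∩ G| / (|P| + |G|). The following minimal NumPy sketch is illustrative only; the function name dice_coefficient and the smoothing term eps are our assumptions, not details from the paper:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |pred & target| / (|pred| + |target|); the eps term is an
    illustrative choice that avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two overlapping 2D masks.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 foreground pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 foreground pixels
print(round(dice_coefficient(a, b), 3))  # 2*4 / (4+6) = 0.8
```

On the example masks, the 4-pixel intersection against mask sizes of 4 and 6 gives 2·4/(4+6) = 0.8; a DSC of 1.0 indicates perfect overlap and 0.0 indicates none.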
Funder
National Institutes of Health