Affiliations:
1. Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School
2. Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School
Abstract
Manual segmentation of tumors and organs at risk (OARs) in 3D imaging for radiation-therapy planning is time-consuming and subject to interobserver variation. Artificial intelligence (AI) can assist with segmentation, but ensuring high-quality segmentation remains challenging, especially for small, variable structures. We investigated how variation in physicians' segmentation quality and style affects the training of deep-learning models for esophagus segmentation, and we propose a new metric, edge roughness, to quantify slice-to-slice inconsistency.
This study includes a real-world cohort of 394 patients who each received radiation therapy (mainly for lung cancer). Esophagus segmentation was performed by 8 physicians as part of routine clinical care. We evaluated the manual segmentations by comparing segmentation length and edge roughness among physicians to analyze inconsistencies. We trained a total of six segmentation models (multiple-physician and individual-physician) based on U-Net architectures with residual backbones, and used the volumetric Dice coefficient to measure each model's performance. We propose edge roughness, a metric that quantifies the shift of segmentation between adjacent slices by calculating the curvature of the edges of the 2D sagittal- and coronal-view projections.
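The two evaluation quantities above can be sketched in a few lines of NumPy. The volumetric Dice coefficient is standard; the edge-roughness computation shown here is an assumed formulation (the abstract specifies only "curvature of edges of the 2D projections"), implemented as the mean absolute second difference of the traced boundary columns of a projected binary mask. The function names and the choice of second difference as a discrete curvature proxy are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def volumetric_dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric Dice coefficient, 2|A∩B| / (|A| + |B|), on binary 3D masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def edge_roughness_2d(mask3d: np.ndarray, axis: int) -> float:
    """Assumed edge-roughness sketch: project the 3D mask along `axis`
    (e.g., to a sagittal or coronal view), trace the left and right
    boundary columns row by row, and average the absolute second
    difference of each boundary (a discrete curvature proxy)."""
    proj = mask3d.any(axis=axis)                 # 2D binary projection
    rows = [r for r in range(proj.shape[0]) if proj[r].any()]
    scores = []
    # left edge = first True column; right edge = last True column
    for edge_fn in (np.argmax, lambda v: len(v) - 1 - np.argmax(v[::-1])):
        boundary = np.array([edge_fn(proj[r]) for r in rows], dtype=float)
        if len(boundary) >= 3:
            scores.append(np.mean(np.abs(np.diff(boundary, n=2))))
    return float(np.mean(scores)) if scores else 0.0
```

Under this formulation, a contour whose boundary shifts abruptly between adjacent slices scores higher than one with straight or smoothly varying edges, matching the intended use of flagging slice-to-slice inconsistency.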
The auto-segmentation model trained on multiple physicians (MD1-7) achieved the highest mean Dice of 73.7±14.8%. The model trained on the individual physician with the highest edge roughness (MD7; mean ± SD: 0.106±0.016) demonstrated significantly lower volumetric Dice on test cases than the other individual-physician models (MD7: 58.5±15.8% vs. MD6: 67.1±16.8%, p < 0.001). An additional multiple-physician model trained after excluding the MD7 data resulted in fewer outliers (e.g., Dice ≤ 40%: 4 cases for MD1-6 vs. 7 cases for MD1-7; Ntotal = 394).
This study demonstrates that manual segmentations in clinical care vary significantly in style and quality, and that training AI auto-segmentation algorithms on real-world clinical datasets containing such outliers can yield unexpectedly under-performing models. Importantly, this study provides a novel evaluation metric, edge roughness, to quantify physician variation in segmentation, which will allow developers to filter clinical training data to optimize model performance.
Publisher
Research Square Platform LLC