Affiliation:
1. School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
2. Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
3. School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, China
Abstract
Background: Accurate segmentation of organs is of great significance for clinical diagnosis, but it remains challenging due to the obscure imaging boundaries caused by tissue adhesion in medical images. Because of the spatial continuity within medical image volumes, the segmentation of slices with obscure boundaries can be inferred from adjacent slices in which the organ boundary is clear; radiologists routinely delineate a clear organ boundary by observing adjacent slices.

Purpose: Inspired by the radiologists' delineation procedure, we design an organ segmentation model based on boundary information from adjacent slices, together with a human–machine interactive learning strategy that introduces clinical experience.

Methods: We propose an interactive organ segmentation method for medical image volumes based on a Graph Convolutional Network (GCN), called Surface-GCN. First, we propose a Surface Feature Extraction Network (SFE-Net) to capture surface features of a target organ, supervised by a Mini-batch Adaptive Surface Matching (MBASM) module. Then, to predict organ boundaries precisely, we design an automatic segmentation module based on a Surface Convolution Unit (SCU), which propagates information over organ surfaces to refine the generated boundaries. In addition, an interactive segmentation module is proposed to learn radiologists' experience of interactive corrections on organ surfaces, reducing the number of interaction clicks required.

Results: We evaluate the proposed method on one prostate MR image dataset and two abdominal multi-organ CT datasets. The experimental results show that our method outperforms other state-of-the-art methods. For prostate segmentation, the proposed method achieves a Dice similarity coefficient (DSC) of 94.49% on the PROMISE12 test dataset. For abdominal multi-organ segmentation, it achieves DSC scores of 95%, 91%, 95%, and 88% for the left kidney, gallbladder, spleen, and esophagus, respectively. For interactive segmentation, it requires 5–10 fewer interaction clicks to reach the same accuracy.

Conclusions: To address the challenge of medical organ segmentation, we propose a Graph Convolutional Network called Surface-GCN, which imitates radiologists' interactions and learns from clinical experience. On both single-organ and multi-organ segmentation tasks, the proposed method obtains more accurate segmentation boundaries than other state-of-the-art methods.
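To make the core idea of "propagating information over organ surfaces" concrete, the sketch below shows one generic mean-aggregation graph-convolution step on a small surface mesh. This is a minimal illustration of a standard GCN layer, not the paper's actual SCU; the names `surface_conv`, `adjacency`, and the toy graph are assumptions for demonstration.

```python
import numpy as np

def surface_conv(features, adjacency, weight):
    """One mean-aggregation graph convolution: each vertex averages its
    neighbours' features (plus its own) and applies a learned projection
    followed by a ReLU. A stack of such layers spreads boundary
    information across a surface mesh."""
    n = adjacency.shape[0]
    a_hat = adjacency + np.eye(n)            # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)   # per-vertex degree
    aggregated = (a_hat @ features) / deg    # mean over each neighbourhood
    return np.maximum(aggregated @ weight, 0)  # projection + ReLU

# Toy 4-vertex surface patch (a path graph) with 8-dim vertex features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 8))

out = surface_conv(feats, adj, w)
print(out.shape)  # (4, 8): one refined feature vector per surface vertex
```

In a real surface-refinement setting, the adjacency would come from the mesh connectivity of the predicted organ boundary and the weights would be learned end-to-end.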
Funder
National Natural Science Foundation of China