Author:
Ye En Zhou, Ye En Hui, Ye Run Zhou
Abstract
Introduction
Analysis of multimodal medical images often requires the selection of one or many anatomical regions of interest (ROIs) for extraction of useful statistics. This task can prove laborious when a manual approach is used. We have previously developed a user-friendly software tool for image-to-image translation using deep learning. Therefore, we present herein an update to the DeepImageTranslator software with the addition of a tool for multimodal medical image segmentation analysis (hereby referred to as the MMMISA).
Methods
The MMMISA was implemented using the Tkinter library; backend computations were implemented using the Pydicom, Numpy, and OpenCV libraries. We tested our software using 4188 whole-body axial 2-deoxy-2-[18F]-fluoroglucose positron emission tomography/computed tomography ([18F]-FDG-PET/CT) slices of 10 patients from the ACRIN-HNSCC (American College of Radiology Imaging Network-Head and Neck Squamous Cell Carcinoma) database. Using the deep learning software DeepImageTranslator, a model was trained with 36 randomly selected CT slices and manually labelled semantic segmentation maps. Using the trained model, we segmented all CT scans of the 10 HNSCC patients with high accuracy. Segmentation maps generated using the deep convolutional network were then used to measure organ-specific [18F]-FDG uptake. We also compared measurements performed using the MMMISA with those made with manually selected ROIs.
Results
The MMMISA is a tool that allows users to select ROIs based on deep learning-generated segmentation maps and to compute accurate statistics for these ROIs based on coregistered multimodal images. We found that organ-specific [18F]-FDG uptake measured using multiple manually selected ROIs is concordant with whole-tissue measurements made with segmentation maps using the MMMISA tool.
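The core operation described in the Methods (selecting an ROI from a deep learning-generated segmentation map and computing uptake statistics from a coregistered PET image) can be illustrated with a minimal sketch in Python using the same libraries named above (Pydicom, NumPy, OpenCV). The file names, the PNG export of the segmentation map, and the organ label value are illustrative assumptions, not details taken from the MMMISA itself.

```python
# Hypothetical sketch: measure organ-specific [18F]-FDG uptake in a single
# coregistered PET slice using a segmentation map predicted from the CT slice.
import numpy as np
import pydicom
import cv2

# Load one coregistered PET slice (DICOM) and convert to physical units
pet_ds = pydicom.dcmread("pet_slice_0001.dcm")
pet = pet_ds.pixel_array.astype(np.float32)
pet = pet * float(getattr(pet_ds, "RescaleSlope", 1.0)) \
        + float(getattr(pet_ds, "RescaleIntercept", 0.0))

# Load the segmentation map for the matching CT slice
# (assumed here to be exported as a single-channel label image)
seg = cv2.imread("seg_slice_0001.png", cv2.IMREAD_GRAYSCALE)

# Resample the segmentation map to the PET grid if the matrix sizes differ
if seg.shape != pet.shape:
    seg = cv2.resize(seg, (pet.shape[1], pet.shape[0]),
                     interpolation=cv2.INTER_NEAREST)

# Select the ROI for one organ (label value 3 is an arbitrary example)
roi = seg == 3

# Whole-ROI uptake statistics
print("voxels in ROI:", int(roi.sum()))
print("mean uptake:", float(pet[roi].mean()))
print("max uptake:", float(pet[roi].max()))
```

In practice the same masking step would be repeated over every slice of the scan and every organ label to obtain the whole-tissue measurements that the Results compare against manually selected ROIs.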
Publisher
Cold Spring Harbor Laboratory
Cited by
2 articles.