Authors:
Aboian Mariam, Bousabarah Khaled, Kazarian Eve, Zeevi Tal, Holler Wolfgang, Merkaj Sara, Cassinelli Petersen Gabriel, Bahar Ryan, Subramanian Harry, Sunku Pranay, Schrickel Elizabeth, Bhawnani Jitendra, Zawalich Mathew, Mahajan Amit, Malhotra Ajay, Payabvash Sam, Tocino Irena, Lin MingDe, Westerhoff Malte
Abstract
Purpose
Personalized interpretation of medical images is critical for optimal patient care, but the tools currently available to physicians for real-time quantitative analysis of a patient's medical images are significantly limited. In this work, we describe a novel platform within the Picture Archiving and Communication System (PACS) for volumetric analysis of images and thus for building, in parallel with the radiologist's reading, the large expert-annotated datasets that are critically needed for the development of clinically meaningful AI algorithms. Specifically, we implemented a deep learning-based algorithm for automated brain tumor segmentation and radiomics extraction, and embedded it into PACS to accelerate a supervised, end-to-end workflow for image annotation and radiomic feature extraction.
Materials and methods
An algorithm was trained to segment whole primary brain tumors on FLAIR images from the multi-institutional BraTS 2021 glioma dataset. The algorithm was validated on an internal dataset from Yale New Haven Health (YNHH) and compared with radiologists' manual segmentations using the Dice similarity coefficient (DSC). The UNETR deep-learning model was embedded into the Visage 7 (Visage Imaging, Inc., San Diego, CA, United States) diagnostic workstation. The automatically generated brain tumor segmentation was amenable to manual modification. PyRadiomics (Harvard Medical School, Boston, MA) was natively embedded into Visage 7 for feature extraction from the brain tumor segmentations.
Results
UNETR brain tumor segmentation took 4 s on average, and the median DSC was 86%, which is similar to values reported in the published literature but lower than those achieved in the RSNA-ASNR-MICCAI BraTS 2021 challenge. Extraction of 106 radiomic features within PACS took 5.8 ± 0.01 s on average. The extracted radiomic features did not vary across repeated extractions or between extraction within and outside of PACS. The workflow makes it possible to perform segmentation and feature extraction before the radiologist opens the study; on opening the study in PACS, the radiologist can verify the segmentation and thereby annotate the study.
Conclusion
Integrating image-processing algorithms for tumor auto-segmentation and feature extraction into PACS enables curation of large datasets of annotated medical images and can accelerate the translation of research into personalized-medicine applications in the clinic. The ability to revise the AI segmentations with familiar clinical tools, together with native embedding of the segmentation and radiomic feature-extraction tools in the diagnostic workstation, accelerates the generation of ground-truth data.
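As an illustration of the segmentation and validation steps described above, the sketch below shows how whole-tumor FLAIR segmentation with a UNETR model and a DSC check might be run using MONAI. This is a minimal sketch under stated assumptions, not the implementation embedded in Visage 7: the checkpoint path ("unetr_flair.pt"), ROI size, and channel configuration are hypothetical and may differ from the model configuration used in the study.

```python
# Minimal sketch: whole-tumor FLAIR segmentation with a UNETR model (MONAI)
# and Dice similarity coefficient (DSC) computation against a manual mask.
# The checkpoint path, ROI size, and channel setup are illustrative assumptions.
import torch
from monai.networks.nets import UNETR
from monai.inferers import sliding_window_inference

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Single-channel FLAIR input, two output channels (background / whole tumor).
model = UNETR(
    in_channels=1,
    out_channels=2,
    img_size=(128, 128, 128),
    feature_size=16,
).to(device)
model.load_state_dict(torch.load("unetr_flair.pt", map_location=device))  # hypothetical checkpoint
model.eval()

def segment(flair_volume: torch.Tensor) -> torch.Tensor:
    """Return a binary whole-tumor mask for a (1, 1, D, H, W) FLAIR volume."""
    with torch.no_grad():
        logits = sliding_window_inference(
            flair_volume.to(device),
            roi_size=(128, 128, 128),
            sw_batch_size=1,
            predictor=model,
            overlap=0.5,
        )
    return logits.argmax(dim=1, keepdim=True).cpu()

def dice(pred: torch.Tensor, truth: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.float(), truth.float()
    inter = (pred * truth).sum()
    return float((2.0 * inter + eps) / (pred.sum() + truth.sum() + eps))
```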
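Similarly, the radiomic feature-extraction step can be sketched with the PyRadiomics API. The file names below are hypothetical and the default extractor settings are used, which need not match the exact configuration that yields the 106 features reported above.

```python
# Minimal sketch: radiomic feature extraction from a tumor segmentation with
# PyRadiomics. Image and mask file names are hypothetical NIfTI volumes.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()

# image / mask may be file paths or SimpleITK images; label=1 selects the
# tumor voxels in the segmentation mask.
features = extractor.execute("flair.nii.gz", "tumor_mask.nii.gz", label=1)

# Keep only the numeric feature values; entries prefixed "diagnostics_"
# describe the extraction run itself rather than the tumor.
radiomic_features = {k: v for k, v in features.items()
                     if not k.startswith("diagnostics_")}
print(len(radiomic_features), "radiomic features extracted")
```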
Funder
National Institutes of Health
Foundation of the American Society of Neuroradiology
Doris Duke Charitable Foundation
Cited by
17 articles.