Segmentation versus detection: Development and evaluation of deep learning models for prostate imaging reporting and data system lesion localisation on bi-parametric prostate magnetic resonance imaging

Authors:

Zhe Min 1,2, Fernando J. Bianco 3, Qianye Yang 2, Wen Yan 2,4, Ziyi Shen 2, David Cohen 3, Rachael Rodell 2, Dean C. Barratt 2, Yipeng Hu 2

Affiliations:

1. School of Control Science and Engineering, Shandong University, Jinan, China

2. Centre for Medical Image Computing and Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK

3. Urological Research Network, Miami Lakes, Florida, USA

4. City University of Hong Kong, Hong Kong, China

Abstract

Automated prostate cancer detection in magnetic resonance imaging (MRI) scans is of significant importance for cancer patient management. Most existing computer-aided diagnosis systems adopt segmentation methods, while object detection approaches have recently shown promising results. The authors have (1) carefully compared the performance of well-established segmentation and object detection methods in localising prostate imaging reporting and data system (PIRADS)-labelled prostate lesions on MRI scans; (2) proposed an additional customised set of lesion-level localisation sensitivity and precision metrics; and (3) proposed efficient ways to ensemble the segmentation and object detection methods for improved performance. The ground-truth (GT)-perspective lesion-level sensitivity and the prediction-perspective lesion-level precision are reported to quantify the ratio of true-positive voxels detected by an algorithm to the number of voxels in the GT-labelled regions and in the predicted regions, respectively. The two networks are trained independently on data from 549 clinical patients with PIRADS-V2 annotations as GT labels, and tested on 161 internal and 100 external MRI scans. At the lesion level, nnDetection outperforms nnUNet for detecting both PIRADS ≥ 3 and PIRADS ≥ 4 lesions in the majority of cases. For example, at an average of three false-positive predictions per patient, nnDetection achieves a greater Intersection-over-Union (IoU)-based sensitivity than nnUNet for detecting PIRADS ≥ 3 lesions: 80.78% ± 1.50% versus 60.40% ± 1.64% (p < 0.01). At the voxel level, nnUNet is in general superior or comparable to nnDetection. The proposed ensemble methods achieve improved or comparable lesion-level accuracy in all tested clinical scenarios. For example, at three false positives per patient, the lesion-wise ensemble method achieves 82.24% ± 1.43% sensitivity versus 80.78% ± 1.50% (nnDetection) and 60.40% ± 1.64% (nnUNet) for detecting PIRADS ≥ 3 lesions. Consistent conclusions are also drawn from results on the external data set.
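As a worked illustration of the metrics described in the abstract, the following Python sketch shows how an IoU-based lesion-level sensitivity and the GT-perspective/prediction-perspective voxel ratios could be computed from binary 3D masks. This is not the authors' implementation: the function names, the use of connected-component labelling to define lesion instances, and the example IoU matching threshold are assumptions made for illustration only.

```python
# Minimal sketch of lesion-level and voxel-level localisation metrics,
# assuming one binary ground-truth (GT) mask and one binary prediction
# mask per patient. Lesion instances are approximated here by
# connected-component labelling (an assumption, not the paper's method).
import numpy as np
from scipy import ndimage


def lesion_level_sensitivity(gt_mask: np.ndarray,
                             pred_mask: np.ndarray,
                             iou_threshold: float = 0.10) -> float:
    """Fraction of GT lesions matched by a predicted lesion with IoU above a threshold."""
    gt_labels, n_gt = ndimage.label(gt_mask)
    pred_labels, n_pred = ndimage.label(pred_mask)
    if n_gt == 0:
        return float("nan")
    hits = 0
    for g in range(1, n_gt + 1):
        g_region = gt_labels == g
        for p in range(1, n_pred + 1):
            p_region = pred_labels == p
            inter = np.logical_and(g_region, p_region).sum()
            union = np.logical_or(g_region, p_region).sum()
            if union > 0 and inter / union >= iou_threshold:
                hits += 1
                break
    return hits / n_gt


def voxel_overlap_ratios(gt_mask: np.ndarray, pred_mask: np.ndarray):
    """GT-perspective sensitivity and prediction-perspective precision:
    true-positive voxels over GT voxels, and over predicted voxels."""
    tp = np.logical_and(gt_mask, pred_mask).sum()
    sens = tp / gt_mask.sum() if gt_mask.sum() > 0 else float("nan")
    prec = tp / pred_mask.sum() if pred_mask.sum() > 0 else float("nan")
    return sens, prec
```

Note that the paper additionally reports sensitivity at fixed numbers of false-positive predictions per patient; that operating-point selection step is not reproduced in this sketch.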

Funder

National Natural Science Foundation of China

Wellcome / EPSRC Centre for Interventional and Surgical Sciences

Publisher

Institution of Engineering and Technology (IET)

