Can input reconstruction be used to directly estimate uncertainty of a dose prediction U‐Net model?

Authors:

Margerie Huet‐Dastarac¹, Dan Nguyen², Eleonore Longton³, Steve Jiang², John Lee¹, Ana Barragán Montero¹

Affiliations:

1. Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium

2. Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA

3. Department of Radiotherapy, Cliniques Universitaires Saint‐Luc, Brussels, Belgium

Abstract

Background

The reliable and efficient estimation of uncertainty in artificial intelligence (AI) models poses an ongoing challenge in many fields, such as radiation therapy. AI models are intended to automate manual steps in the treatment planning workflow. This study focuses on dose prediction models, which predict an optimal dose trade‐off for each new patient for a specific treatment modality. Such models can guide physicians during optimization, be part of automatic treatment plan generation, or support decisions on treatment indication. The most common uncertainty estimation methods are based on Bayesian approximations, such as Monte Carlo dropout (MCDO) or deep ensembling (DE). These two techniques, however, have a high inference time (i.e., they require multiple inference passes) and might not work for detecting out‐of‐distribution (OOD) data (i.e., the uncertainty estimates for in‐distribution (ID) and OOD data overlap).

Purpose

In this study, we present a direct uncertainty estimation method and apply it to a dose prediction U‐Net architecture. It can be used to flag OOD data and to give information on the quality of the dose prediction.

Methods

Our method consists of adding a branch that decodes from the bottleneck and reconstructs the CT scan given as input. The input reconstruction error can be used as a surrogate for the model uncertainty. As a proof of concept, our method is applied to proton therapy dose prediction in head and neck cancer patients. A dataset of 60 oropharyngeal patients was used to train the network using a nested cross‐validation approach with 11 folds (training: 50 patients, validation: 5 patients, test: 5 patients). For the OOD experiment, we used 10 extra patients with a different head and neck sub‐location. Accuracy, time gain, and OOD detection are analyzed for our method in this particular application and compared with the popular MCDO and DE.

Results

The additional branch did not reduce the accuracy of the dose prediction model. The median absolute error is close to zero for the target volumes and less than 1% of the dose prescription for organs at risk. Our input reconstruction method showed a higher Pearson correlation coefficient with the prediction error (0.620) than DE (0.447) and MCDO (between 0.599 and 0.612). Moreover, our method allows easier identification of OOD data (no overlap between ID and OOD data, and a Z‐score of 34.05). The uncertainty is estimated simultaneously with the regression task and therefore requires less time and fewer computational resources.

Conclusions

This study shows that the error in the CT scan reconstruction can be used as a surrogate for the uncertainty of the model. The Pearson correlation coefficient with the dose prediction error is slightly higher than that of state‐of‐the‐art techniques. OOD data can be detected more easily, and the uncertainty metric is computed simultaneously with the regression task, making it faster than MCDO or DE. The code and pretrained model are available in the GitLab repository: https://gitlab.com/ai4miro/ct‐reconstruction‐for‐uncertainty‐quatification‐of‐hdunet
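For readers who want a concrete picture of the idea described in the Methods, the sketch below is a minimal PyTorch illustration, not the authors' released implementation (the actual model is an HD U‐Net; see the linked repository). The class and function names (DualBranchUNet, reconstruction_uncertainty), the layer sizes, and the training‐set statistics mu_train and sigma_train are assumptions made for illustration only. It shows a shared encoder whose bottleneck feeds two decoders (one predicting dose, one reconstructing the input CT) and how the CT reconstruction error can be converted into a Z‐score to flag possible OOD cases.

# Minimal sketch (assumptions noted above): a U-Net-style dose prediction network
# with a second decoder branch from the bottleneck that reconstructs the input CT.
# At inference, the CT reconstruction error serves as an uncertainty surrogate.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class DualBranchUNet(nn.Module):
    """Shared encoder with two decoders: one predicts dose, one reconstructs the CT."""

    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, 2 * base)
        self.bottleneck = conv_block(2 * base, 4 * base)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        # Dose decoder (uses skip connections, as in a standard U-Net)
        self.dec2_dose = conv_block(4 * base + 2 * base, 2 * base)
        self.dec1_dose = conv_block(2 * base + base, base)
        self.out_dose = nn.Conv3d(base, 1, kernel_size=1)
        # CT reconstruction decoder (decodes from the bottleneck only)
        self.dec2_ct = conv_block(4 * base, 2 * base)
        self.dec1_ct = conv_block(2 * base, base)
        self.out_ct = nn.Conv3d(base, 1, kernel_size=1)

    def forward(self, ct):
        e1 = self.enc1(ct)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Dose branch
        d2 = self.dec2_dose(torch.cat([self.up(b), e2], dim=1))
        d1 = self.dec1_dose(torch.cat([self.up(d2), e1], dim=1))
        dose = self.out_dose(d1)
        # CT reconstruction branch
        r2 = self.dec2_ct(self.up(b))
        r1 = self.dec1_ct(self.up(r2))
        ct_rec = self.out_ct(r1)
        return dose, ct_rec


def reconstruction_uncertainty(model, ct, mu_train, sigma_train):
    """Return the predicted dose, the CT reconstruction error (uncertainty surrogate),
    and its Z-score relative to the training-set reconstruction errors.
    mu_train / sigma_train are assumed to be computed offline on the training folds."""
    model.eval()
    with torch.no_grad():
        dose, ct_rec = model(ct)
    rec_error = torch.mean(torch.abs(ct_rec - ct)).item()  # mean absolute error
    z_score = (rec_error - mu_train) / sigma_train
    return dose, rec_error, z_score

Because the reconstruction error is produced in the same forward pass as the dose prediction, no extra inference passes are needed (unlike MCDO or DE), and a large Z‐score against the training‐fold error distribution can be used to flag a case as OOD.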

Funder

Fédération Wallonie-Bruxelles

Publisher

Wiley
