Affiliation:
1. Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
2. Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
Abstract
Background
Artificial intelligence-based methods have generated substantial interest in nuclear medicine. An area of significant interest has been the use of deep-learning (DL)-based approaches for denoising images acquired with lower doses, shorter acquisition times, or both. Objective evaluation of these approaches is essential for clinical application.
Purpose
DL-based approaches for denoising nuclear-medicine images have typically been evaluated using fidelity-based figures of merit (FoMs) such as root mean squared error (RMSE) and structural similarity index measure (SSIM). However, these images are acquired for clinical tasks and thus should be evaluated based on their performance in those tasks. Our objectives were to (1) investigate whether evaluation with these FoMs is consistent with objective clinical-task-based evaluation, (2) provide a theoretical analysis for determining the impact of denoising on signal-detection tasks, and (3) demonstrate the utility of virtual imaging trials (VITs) for evaluating DL-based methods.
Methods
We conducted a VIT to evaluate a DL-based method for denoising myocardial perfusion SPECT (MPS) images, following the recently published best practices for the evaluation of AI algorithms for nuclear medicine (the RELAINCE guidelines). An anthropomorphic patient population modeling clinically relevant variability was simulated. Projection data for this population at normal and low-dose count levels (20%, 15%, 10%, and 5%) were generated using well-validated Monte Carlo-based simulations, and images were reconstructed using a 3-D ordered-subsets expectation-maximization-based approach. The low-dose images were then denoised using a commonly used convolutional neural network-based approach. The impact of DL-based denoising was evaluated using both fidelity-based FoMs and the area under the receiver operating characteristic curve (AUC), which quantified performance on the clinical task of detecting perfusion defects in MPS images, as obtained with a model observer using anthropomorphic channels. We then provide a mathematical treatment to probe the impact of post-processing operations on signal-detection tasks and use this treatment to analyze the findings of the study.
Results
Based on fidelity-based FoMs, denoising with the considered DL-based method yielded significantly superior performance. However, based on ROC analysis, denoising did not improve, and in fact often degraded, detection-task performance. This discordance between fidelity-based FoMs and task-based evaluation was observed at all the low-dose levels and for different cardiac-defect types. Our theoretical analysis revealed that the major reason for this degraded performance was that the denoising method reduced the difference in the means of the reconstructed images, and of the channel operator-extracted feature vectors, between the defect-absent and defect-present cases.
Conclusions
The results show a discrepancy between the evaluation of DL-based methods with fidelity-based metrics and their evaluation on clinical tasks, motivating the need for objective task-based evaluation of DL-based denoising approaches. Further, this study shows how VITs provide a mechanism to conduct such evaluations computationally, in a time- and resource-efficient setting, while avoiding risks such as radiation dose to the patient.
Finally, our theoretical treatment reveals insights into the reasons for the limited performance of the denoising approach and may be used to probe the effect of other post‐processing operations on signal‐detection tasks.
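The mean-difference explanation in the Results can be connected to standard signal-detection theory. The relations below are textbook expressions for a channelized linear observer, stated here only to make that argument concrete; the notation is illustrative and the paper's own derivation may be more general.

```latex
% Textbook channelized linear-observer relations (illustrative notation;
% not necessarily the exact form of the paper's derivation).
% U: channel matrix, \hat{f}: reconstructed image, w: observer template,
% \bar{v}_0, \bar{v}_1: channelized means (defect absent / present),
% K_0, K_1: channelized covariances, \Phi: standard normal CDF.
\begin{align}
  \mathbf{v} &= \mathbf{U}^{T}\hat{\mathbf{f}}, \qquad t = \mathbf{w}^{T}\mathbf{v},\\
  \mathrm{SNR}^{2} &= \left(\bar{\mathbf{v}}_{1}-\bar{\mathbf{v}}_{0}\right)^{T}
      \left[\tfrac{1}{2}\left(\mathbf{K}_{0}+\mathbf{K}_{1}\right)\right]^{-1}
      \left(\bar{\mathbf{v}}_{1}-\bar{\mathbf{v}}_{0}\right),\\
  \mathrm{AUC} &= \Phi\!\left(\mathrm{SNR}/\sqrt{2}\right)
  \quad \text{(for Gaussian, equal-variance test statistics).}
\end{align}
```

In this form, a post-processing operator that shrinks the difference of channelized means without a commensurate reduction in the channelized covariances lowers the SNR, and hence the AUC, which is consistent with the degradation reported above.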
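To illustrate the first class of metrics, below is a minimal sketch, not the authors' code, of computing the fidelity-based FoMs (RMSE and SSIM) for a denoised low-dose volume against a normal-dose reference. The array names, shapes, and the use of scikit-image are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): fidelity-based figures of merit
# (RMSE and SSIM) for a denoised low-dose volume against a normal-dose
# reference. Array names, shapes, and the use of scikit-image are
# illustrative assumptions.
import numpy as np
from skimage.metrics import structural_similarity

def rmse(reference, test):
    """Root mean squared error between two image volumes."""
    return float(np.sqrt(np.mean((reference - test) ** 2)))

def ssim(reference, test):
    """Structural similarity index over the full 3-D volume."""
    return float(structural_similarity(
        reference, test, data_range=reference.max() - reference.min()))

# Synthetic stand-ins for reconstructed volumes (hypothetical data).
rng = np.random.default_rng(0)
normal_dose = rng.poisson(50.0, size=(64, 64, 64)).astype(np.float64)
denoised_low_dose = normal_dose + rng.normal(0.0, 2.0, size=normal_dose.shape)

print("RMSE:", rmse(normal_dose, denoised_low_dose))
print("SSIM:", ssim(normal_dose, denoised_low_dose))
```

Lower RMSE and higher SSIM relative to the normal-dose image indicate higher fidelity but, as the study shows, do not by themselves imply better defect-detection performance.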
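For the task-based evaluation, the sketch below implements a generic channelized Hotelling observer with rotationally symmetric difference-of-Gaussians channels and a nonparametric (Mann-Whitney) AUC estimate. This is a common anthropomorphic-channel setup, but it is only an assumption here; the study's specific channel model, training/testing protocol, and statistical analysis may differ.

```python
# Minimal sketch (an assumption, not the study's implementation): a generic
# channelized Hotelling observer (CHO) with difference-of-Gaussians channels
# and a nonparametric (Mann-Whitney) estimate of the AUC.
import numpy as np

def dog_channels(size, n_channels=3, sigma0=2.0, alpha=1.67):
    """Difference-of-Gaussians channel matrix of shape (size*size, n_channels)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x ** 2 + y ** 2
    cols = []
    for j in range(n_channels):
        s1, s2 = sigma0 * alpha ** j, sigma0 * alpha ** (j + 1)
        c = np.exp(-r2 / (2 * s2 ** 2)) - np.exp(-r2 / (2 * s1 ** 2))
        cols.append(c.ravel() / np.linalg.norm(c))
    return np.stack(cols, axis=1)

def cho_auc(present, absent, channels):
    """Apply a CHO to defect-present/absent ensembles and return the AUC."""
    vp = present.reshape(len(present), -1) @ channels    # channelized features
    va = absent.reshape(len(absent), -1) @ channels
    k = 0.5 * (np.cov(vp.T) + np.cov(va.T))              # average covariance
    w = np.linalg.solve(k, vp.mean(0) - va.mean(0))      # Hotelling template
    tp, ta = vp @ w, va @ w                              # observer test statistics
    # Mann-Whitney (Wilcoxon) estimate of the area under the ROC curve.
    # For simplicity the same ensembles build and score the observer;
    # a real study would separate training and testing data.
    return float(np.mean(tp[:, None] > ta[None, :]) +
                 0.5 * np.mean(tp[:, None] == ta[None, :]))

# Illustrative usage with synthetic 64 x 64 image ensembles.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:64, :64]
defect = 0.3 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0)
absent = rng.normal(0.0, 1.0, size=(200, 64, 64))
present = rng.normal(0.0, 1.0, size=(200, 64, 64)) - defect  # localized reduction
print("AUC:", cho_auc(present, absent, dog_channels(64)))
```

Running such an observer on the low-dose and denoised image ensembles, as the study does with its own observer model, yields the detection-task AUC values that are compared against the fidelity-based FoMs.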