Abstract
Accurate image quality assessment (IQA) is crucial for optimizing computed tomography (CT) imaging protocols while keeping the radiation dose as low as reasonably achievable. In the medical domain, IQA is based on how well an image provides a useful and efficient presentation necessary for physicians to make a diagnosis. Moreover, IQA results should be consistent with radiologists’ opinions on image quality, which are accepted as the gold standard for medical IQA. As such, the goals of medical IQA differ greatly from those of natural image IQA. In addition, the lack of pristine reference images or radiologists’ opinions in a real-time clinical environment makes IQA challenging. Thus, no-reference IQA (NR-IQA) is more desirable in clinical settings than full-reference IQA (FR-IQA). Leveraging an innovative self-supervised training strategy in which an object detection model learns to detect virtually inserted objects of geometrically simple form, we propose a novel NR-IQA method, named deep detector IQA (D2IQA), that can automatically calculate the quantitative quality of CT images. Extensive experimental evaluations on clinical and anthropomorphic phantom CT images demonstrate that our D2IQA robustly computes perceptual image quality as it varies with relative dose level. Moreover, when considering the correlation between the evaluation results of IQA metrics and radiologists’ quality scores, our D2IQA is marginally superior to other NR-IQA metrics and even shows performance competitive with FR-IQA metrics.
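The core idea can be illustrated with a minimal sketch: insert simple, low-contrast geometric objects into a CT slice at known locations, and score image quality by how reliably a detector recovers them (noisier, lower-dose images make detection harder). The sketch below is a hypothetical illustration, not the authors' implementation; the helper names `insert_disk` and `detection_quality_score`, the HU contrast range, and the `detector` callable (which stands in for the trained detection network used in the paper) are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of the insertion-and-detection principle behind D2IQA.
# Not the authors' code: insertion parameters and the scoring rule are illustrative.
import numpy as np


def insert_disk(slice_hu, center, radius, contrast_hu):
    """Insert a low-contrast circular object into a CT slice (values in HU)."""
    out = slice_hu.astype(np.float64, copy=True)
    yy, xx = np.ogrid[:out.shape[0], :out.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out[mask] += contrast_hu
    return out, mask


def detection_quality_score(slice_hu, detector, n_trials=50, rng=None):
    """Proxy quality score: fraction of inserted disks the detector recovers.

    `detector(image)` is assumed to return candidate (row, col) centers; in the
    paper this role is played by a self-supervised object detection network.
    """
    rng = np.random.default_rng() if rng is None else rng
    hits = 0
    for _ in range(n_trials):
        r = int(rng.integers(4, 12))                       # random radius (pixels)
        c = (int(rng.integers(r, slice_hu.shape[0] - r)),  # random center inside image
             int(rng.integers(r, slice_hu.shape[1] - r)))
        contrast = rng.uniform(10.0, 40.0)                 # low-contrast insertion (HU)
        perturbed, _ = insert_disk(slice_hu, c, r, contrast)
        found = any(np.hypot(p[0] - c[0], p[1] - c[1]) <= r
                    for p in detector(perturbed))
        hits += int(found)
    # Higher score = objects are easier to detect = better perceptual quality.
    return hits / n_trials
```

In the actual method, the detection model is trained in a self-supervised way on such virtually inserted objects, and a detection-performance statistic over many insertions serves as the quality score; this sketch only conveys that general principle.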
Funder
Korea Medical Device Development Fund
National Research Foundation of Korea
Electronics and Telecommunications Research Institute (ETRI) grant
Technology development Program of MSS
Subject
Artificial Intelligence, Human-Computer Interaction, Software
Cited by
12 articles.