Affiliation:
1. Department of Radiological Technology, Faculty of Health Science, Juntendo University
2. Hosei University
3. Juntendo University
Abstract
Background: Reducing the amount of projection data in computed tomography (CT), known as sparse-view CT, can lower the exposure dose; however, it can introduce image artifacts. We quantitatively evaluated the image-quality restoration achieved by a conditional generative adversarial network (CGAN) for sparse-view CT using simulated sparse projection images and compared it with autoencoder (AE) and U-Net models.
Methods: To simulate sparse-view CT, we acquired fan-beam projections of chest CT images (4,250 slices) at rotation-angle intervals of 1°, 2°, 5°, and 10°. Four types of sinograms with different degrees of projection decimation were generated to simulate sparse-view CT. The AE, U-Net, and CGAN models were trained on pairs of artifact-affected and original images, with 90% of the data used for training and the remainder for evaluation. CT value restoration was evaluated using the mean error (ME) and mean absolute error (MAE). Image quality was evaluated using the structural similarity (SSIM) and peak signal-to-noise ratio (PSNR).
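As an illustration only (not the authors' code), the following Python sketch shows how a full-view sinogram could be decimated to simulate sparse-view sampling at a given angular interval, and how the ME, MAE, SSIM, and PSNR metrics named above could be computed between a restored slice and the original; the array shapes, the 1° baseline sampling, and the function names are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio


def decimate_sinogram(sinogram, step):
    """Keep every `step`-th projection row to simulate sparse-view sampling.

    sinogram : 2-D array of shape (n_angles, n_detectors), assumed sampled
               at 1-degree intervals.
    step     : decimation factor (e.g. 2, 5, 10 for 2-, 5-, 10-degree views).
    """
    return sinogram[::step, :]


def evaluate_restoration(original, restored, data_range=None):
    """Return ME, MAE, SSIM, and PSNR between original and restored slices."""
    diff = restored.astype(np.float64) - original.astype(np.float64)
    me = diff.mean()            # mean error (signed CT-value bias)
    mae = np.abs(diff).mean()   # mean absolute error
    if data_range is None:
        data_range = float(original.max() - original.min())
    ssim = structural_similarity(original, restored, data_range=data_range)
    psnr = peak_signal_noise_ratio(original, restored, data_range=data_range)
    return {"ME": me, "MAE": mae, "SSIM": ssim, "PSNR": psnr}


# Example with synthetic stand-ins for real CT slices:
rng = np.random.default_rng(0)
original = rng.normal(0.0, 1.0, size=(512, 512))
restored = original + rng.normal(0.0, 0.05, size=(512, 512))
print(evaluate_restoration(original, restored))
```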
Results: Organ structures were restored with sparse projections of up to 2°; however, slight deformation of tumor and spine regions was observed at decimation angles of 5° or more. Image resolution decreased and blurring occurred with the AE and U-Net models; consequently, large deviations in ME and MAE were observed in lung and air regions, and the SSIM and PSNR results were degraded.
Conclusions: The CGAN demonstrated higher image reproducibility than the AE and U-Net models, particularly for accurate CT value restoration. However, at decimation angles of 5° or more, accurate reconstruction of organ structures was limited.
Publisher
Research Square Platform LLC