Author:
Inkinen S. I., Kotiaho A. O., Hanni M., Nieminen M. T., Brix M. A. K.
Abstract
Image artefacts in computed tomography (CT) limit the diagnostic quality of the images. The objective of this proof-of-concept study was to apply deep learning (DL) for automated CT artefact classification. Openly available head CT data from Johns Hopkins University were used. Three common artefacts (patient movement, beam hardening, and ring artefacts (RAs)) and artefact-free images were simulated using 2D axial slices. Simulated data were split into a training set (Ntrain = 1040 × 4 (4160)), two validation sets (Nval1 = 130 × 4 (520) and Nval2 = 130 × 4 (520)), and a separate test set (Ntest = 201 × 4 (804); two individual subjects). The VGG-16 architecture was used as the DL classifier, and the Grad-CAM approach was used to produce attention maps. Model performance was evaluated using accuracy, average precision, area under the receiver operating characteristic (ROC) curve, precision, recall, and F1-score. Sensitivity analysis was performed on two test-set slice images in which RAs of different radii (4 to 245 pixels) and movement artefacts, i.e., head tilt with rotation angles of 0.2° to 3°, were generated. Artefact classification performance was excellent on the test set: accuracy, average precision, and area under the ROC curve over all classes were 0.91, 0.86, and 0.99, respectively. The class-wise precision, recall, and F1-scores were over 0.84, 0.71, and 0.85 in all cases. Sensitivity analysis revealed that the model detected movement at all rotation angles, yet it failed to detect the smallest RAs (4-pixel radius). DL can be used for effective detection of CT artefacts. In the future, DL could be applied for automated quality assurance of clinical CT.
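The abstract reports class-wise precision, recall, and F1-score for the four classes (artefact-free, movement, beam hardening, ring). As a minimal sketch, not the authors' code, the following shows how such class-wise metrics can be computed from parallel lists of true and predicted labels; the class names follow the abstract, and the example labels are purely illustrative:

```python
# Class-wise precision, recall, and F1 for a four-class CT artefact
# classifier. This is an illustrative sketch; the label data below is
# invented and does not come from the study.

CLASSES = ["artefact-free", "movement", "beam hardening", "ring"]

def classwise_metrics(y_true, y_pred):
    """Return {class: (precision, recall, f1)} from parallel label lists."""
    metrics = {}
    for c in CLASSES:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[c] = (precision, recall, f1)
    return metrics

# Hypothetical labels for demonstration only:
y_true = ["ring", "ring", "movement", "artefact-free", "beam hardening", "ring"]
y_pred = ["ring", "movement", "movement", "artefact-free", "beam hardening", "artefact-free"]
m = classwise_metrics(y_true, y_pred)
```

One-vs-rest counting like this is the standard way to obtain per-class scores in a multi-class setting; averaging the per-class values would give the macro-averaged metrics.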
Publisher
Springer Nature Switzerland