Affiliation:
1. Department of Biomedical Imaging, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia
2. Universiti Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia
3. Division of Radiological Sciences, Singapore General Hospital, Bukit Merah, Singapore
4. Department of Vascular and Interventional Radiology, Singapore General Hospital, Bukit Merah, Singapore
Abstract
Background
Fluoroscopy-guided interventions (FGIs) pose a risk of prolonged radiation exposure; personalized patient dosimetry is necessary to improve patient safety during these procedures. However, current FGI systems do not capture the precise exposure regions of the patient, making it challenging to perform patient- and procedure-specific dosimetry. Thus, there is a pressing need to develop approaches to extract and use this information to enable personalized radiation dosimetry for interventional procedures.

Purpose
To propose a deep learning (DL) approach for the automatic localization of 3D anatomical landmarks on randomly collimated and magnified 2D head fluoroscopy images.

Materials and Methods
The model was developed with datasets comprising 800 000 pseudo-2D synthetic images (a mixture of vessel-enhanced and non-enhanced), each with 55 annotated anatomical landmarks (two of which mark the eye lenses), generated from 135 retrospectively collected head computed tomography (CT) volumes. Before training, dynamic random cropping was performed to mimic the varied field-size collimation in FGI procedures. Gaussian-distributed additive noise was applied to each image to enhance the robustness of the DL model against image degradation that may occur during clinical image acquisition. The model was trained on 629 370 synthetic images for approximately 275 000 iterations and evaluated against a synthetic-image test set and a clinical fluoroscopy test set.

Results
The model shows good performance in estimating both in-image and out-of-image landmark positions and demonstrates the feasibility of instantiating the skull shape. It successfully detected 96.4% of 2D landmarks and 92.5% of 3D landmarks within a 10 mm error on synthetic test images.
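The two augmentation steps described above, dynamic random cropping to mimic collimated field sizes and Gaussian-distributed additive noise, can be sketched as follows. This is a minimal illustration only: the crop-fraction range and noise level (`min_frac`, `max_frac`, `noise_sigma`) are assumptions, as the abstract does not specify the values used.

```python
import numpy as np

def augment(image, rng, min_frac=0.4, max_frac=1.0, noise_sigma=0.02):
    """Illustrative augmentation: random crop (collimation) + Gaussian noise.

    Crop-fraction range and noise sigma are assumed values, not taken
    from the paper. `image` is a 2D array with intensities in [0, 1].
    """
    h, w = image.shape
    # Dynamic random cropping: pick a random field size and position
    # to mimic the varied collimation seen in FGI procedures.
    ch = int(h * rng.uniform(min_frac, max_frac))
    cw = int(w * rng.uniform(min_frac, max_frac))
    y0 = rng.integers(0, h - ch + 1)
    x0 = rng.integers(0, w - cw + 1)
    crop = image[y0:y0 + ch, x0:x0 + cw].astype(np.float32)
    # Gaussian-distributed additive noise to simulate acquisition degradation.
    noisy = crop + rng.normal(0.0, noise_sigma, crop.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((256, 256))
out = augment(img, rng)
```

In a real pipeline, the annotated landmark coordinates would also be shifted by the crop offset so that landmarks falling outside the cropped field become out-of-image targets.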
On clinical fluoroscopy images, it achieved a mean radial error of 3.6 ± 2.3 mm and successfully detected 96.8% of 2D landmarks within a 10 mm error.

Conclusion
Our deep learning model successfully localizes anatomical landmarks and estimates the gross shape of skull structures from collimated 2D projection views. This method may help identify the exposure region required for patient-specific organ dosimetry in FGI procedures.
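The evaluation metrics reported above, mean radial error and the fraction of landmarks detected within a 10 mm tolerance, are standard for landmark localization and can be computed as sketched below. The function and array names are illustrative; the paper's exact evaluation code is not given.

```python
import numpy as np

def landmark_metrics(pred, gt, tol_mm=10.0):
    """Mean radial error (mm) and fraction of landmarks within tol_mm.

    pred, gt: (N, D) arrays of landmark coordinates in mm (D = 2 or 3).
    Names and interface are illustrative assumptions.
    """
    # Radial (Euclidean) error per landmark.
    err = np.linalg.norm(pred - gt, axis=1)
    return float(err.mean()), float((err <= tol_mm).mean())

# Toy example with three 2D landmarks (coordinates in mm).
gt = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 20.0]])
pred = np.array([[3.0, 4.0], [10.0, 6.0], [0.0, 35.0]])
mre, frac = landmark_metrics(pred, gt)
# per-landmark errors are [5, 6, 15] mm, so mre ≈ 8.67 mm and 2/3 fall within 10 mm
```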