Abstract
Purpose
At airport security checkpoints, baggage screening aims to prevent the transport of prohibited and potentially dangerous items. Inspecting the projection images produced by X-ray scanners is the standard method, but when multiple objects are stacked on top of each other, distinguishing them from a single two-dimensional image is difficult, which motivates the investigation of more precise imaging techniques. Reconstructing three-dimensional computed tomography (CT) volumes from 2D X-ray images is a reliable solution.

Design/methodology/approach
To distinguish the contours of stacked items more accurately, a multi-information fusion network (MFCT-GAN), based on a generative adversarial network (GAN) and a U-like network (U-Net), is proposed to reconstruct 3D CT volumes from two biplanar orthogonal X-ray projections. The authors use three modules to enhance the reconstruction both qualitatively and quantitatively relative to the original network: the skip connection modification (SCM) and the multi-channel residual dense block (MRDB) enable the network to extract more feature information and learn deeper representations efficiently, and the introduction of a subjective loss makes the network attend to the structural similarity (SSIM) of images during training.

Findings
By fusing multiple sources of information, MFCT-GAN significantly improves quantitative indexes and separates the contours of different targets explicitly. In particular, SCM makes features more reasonable and accurate when they are expanded into three dimensions; MRDB alleviates slow optimization in the late training period and reduces computational cost; and the subjective loss guides the network to retain more high-frequency information, yielding rendered CT volumes with clearer detail.

Originality/value
The authors' proposed MFCT-GAN restores the 3D shapes of different objects well from biplanar projections. This is useful at security checkpoints, where X-ray images of stacked objects must be screened for the presence of prohibited items. The authors adopt three new modules (SCM, MRDB and subjective loss) and analyze the role each plays in 3D reconstruction. Results show a significant improvement in both objective and subjective reconstruction quality.
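The subjective loss described above rewards structural similarity between the rendered and ground-truth volumes rather than only voxel-wise error. As a rough illustration only (the abstract does not give the paper's actual formulation), the sketch below blends a voxel-wise MSE term with a simplified global SSIM penalty; the function names, the `alpha` weighting and the use of a single global SSIM window are all assumptions for illustration.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified SSIM computed over the whole volume (no sliding window),
    # assuming inputs are normalized to [0, 1]. c1, c2 are the usual
    # stabilizing constants from the SSIM definition.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def subjective_loss(pred, target, alpha=0.5):
    # Hypothetical blend of voxel-wise MSE and an SSIM penalty; the
    # paper's exact weighting between the terms is not stated in the
    # abstract, so alpha here is an illustrative choice.
    mse = np.mean((pred - target) ** 2)
    return alpha * mse + (1 - alpha) * (1.0 - ssim_global(pred, target))
```

For identical volumes the SSIM term is 1 and the loss is 0; as high-frequency structure in the prediction diverges from the target, the SSIM penalty grows even when the MSE stays small, which is the behavior the subjective loss is meant to encourage.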
Cited by 4 articles.
1. CLCT-GAN: Strong-Weak Contrastive Learning for Reconstructing CT Images from Radiographs;2024 International Joint Conference on Neural Networks (IJCNN);2024-06-30
2. 3DSP-GAN: A 3D-to-3D Network for CT Reconstruction from Biplane X-rays;2024 IEEE 7th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC);2024-03-15
3. 3D Image Generation from X-Ray Projections Using Generative Adversarial Networks;2023 IEEE 23rd International Conference on Bioinformatics and Bioengineering (BIBE);2023-12-04
4. Learning Deep Intensity Field for Extremely Sparse-View CBCT Reconstruction;Lecture Notes in Computer Science;2023