Abstract
Medical imaging plays an essential role in medicine and disease diagnosis. Medical images from a single modality contain limited data about an organ, whereas images from different modalities contain complementary structural and functional data. Medical image fusion (MIF) strategies integrate this complementary information from two medical images captured using distinct modalities. This paper offers a new multimodal MIF approach using parameter-adaptive pulse-coupled neural networks (PA-PCNN) within the non-subsampled contourlet transform (NSCT) domain. The NSCT decomposes the source images into high- and low-frequency bands, the PA-PCNN fuses those bands, and the fused image is created by applying the inverse NSCT. To demonstrate the proposed approach's performance, we employ a variety of medical images, including computed tomography (CT), magnetic resonance imaging (MRI), single-photon emission CT (SPECT), and positron emission tomography (PET). Our experiments use five fusion metrics to validate the proposed approach's performance: entropy (EN), mutual information (MI), weighted edge information (Q$$^{AB/F}$$), nonlinear correlation information entropy (Q$$_{ncie}$$), and average gradient (AG). Outcomes show that the proposed approach achieves high overall performance in visual and objective characteristics compared with five well-known MIF methods. The average values of EN, MI, Q$$^{AB/F}$$, Q$$_{ncie}$$, and AG with the proposed approach are 5.2144, 3.1282, 0.6600, 0.8071, and 8.9874, respectively.
Publisher
Springer Science and Business Media LLC
Subject
Computer Networks and Communications, Hardware and Architecture, Media Technology, Software