Affiliation:
1. Department of Computational Intelligence, SRM IST, Kattankulathur 603203, Tamil Nadu, India
Abstract
Multi-focus images can be fused using deep learning (DL). Early multi-focus image fusion (MFIF) methods cast fusion as a classification task: a convolutional neural network (CNN) classifier decides whether each pixel is focused or defocused. A key drawback of this supervised MFIF methodology is the lack of labeled data for training, so an unsupervised DL model is a more practical and appropriate choice for image fusion. Building on a framework of feature extraction, fusion, and reconstruction, we construct a deep CNN [Formula: see text] end-to-end unsupervised model, defined as a Siamese Multi-Scale feature extraction model. Its major limitation is that it can fuse only three source images of the same scene; because the sources may be of low intensity or blurred, restricting fusion to three images can lead to poor performance. The main objective of this work is to use [Formula: see text] parameters to accommodate [Formula: see text] source images. The proposed feature extraction method is compared against many existing systems. Experimental results across various approaches show that Enhanced Siamese Multi-Scale feature extraction combined with the Structural Similarity Measure (SSIM) produces an excellent fused image, as determined by quantitative and qualitative studies covering both objective metrics and visual inspection. Increasing the number of parameters improves the objective assessment scores, at the cost of higher time complexity.
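The abstract evaluates fusion quality with SSIM. As an illustration of the metric (not the paper's own implementation), here is a minimal sketch of the Wang et al. SSIM formula applied globally over a whole image; production implementations instead average the score over a sliding Gaussian window. The function name `global_ssim` and the constants K1=0.01, K2=0.03 follow the common convention and are assumptions, not taken from the paper.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified single-window SSIM between two images.

    Applies the SSIM formula once over the whole image rather than
    per local window, so it only sketches how the metric scores a
    fused image against a reference.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# A fused image identical to the reference scores 1.0; a corrupted
# one scores lower, which is how SSIM ranks fusion results.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
print(global_ssim(ref, ref))        # 1.0 by construction
print(global_ssim(ref, 255 - ref))  # lower: structure is inverted
```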
Publisher
World Scientific Pub Co Pte Ltd
Subject
Computer Science Applications, Theoretical Computer Science, Software
Cited by
2 articles.