Author:
Ahmed Ali, Almagrabi Alaa Omran, Osman Ahmed Hamza
Abstract
Content-based image retrieval (CBIR) is a recent method used to retrieve different types of images from repositories. Traditional content-based medical image retrieval (CBMIR) methods commonly use low-level image representation features extracted from color, texture, and shape descriptors. Since most of these CBMIR systems depend mainly on the extracted features, the methods used in the feature extraction phase are of particular importance. Feature extraction methods that generate inaccurate features lead to very poor retrieval performance because they widen the semantic gap. Hence, there is a high demand for domain-knowledge-independent feature extraction methods that can learn automatically from input images. Pre-trained deep convolutional neural networks (CNNs), the recent generation of deep learning networks, can be used to extract expressive and accurate features. The main advantage of these pre-trained CNN models is that they are trained on huge image datasets covering thousands of classes, and the knowledge acquired during training can easily be transferred. Many pre-trained CNN models have been applied successfully to medical image retrieval, image classification, and object recognition. This study utilizes two of the best-known pre-trained CNN models, ResNet18 and SqueezeNet, for the offline feature extraction stage. The highly accurate features extracted from the medical images are then used by the CBMIR method for retrieval. Two popular medical image datasets, Kvasir and PH2, are used to show that the proposed methods achieve good retrieval results. The proposed method achieves an average precision of 97.75% and 83.33% for the Kvasir and PH2 medical images, respectively, and outperforms some state-of-the-art methods in this field because the pre-trained CNNs have layers well trained on a huge number of image types. Finally, intensive statistical analysis shows that the proposed ResNet18-based retrieval method performs best in enhancing both recall and precision for both medical image datasets.
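The abstract describes an offline stage that extracts features with a pre-trained CNN and an online stage that ranks repository images against a query. The paper's own implementation is not reproduced here; the following is a minimal sketch, assuming a PyTorch/torchvision setup, ImageNet-pre-trained ResNet18 weights, cosine similarity as the distance measure, and hypothetical dataset paths, of how such a feature-extraction-plus-retrieval pipeline could look.

```python
# Illustrative sketch only: pre-trained ResNet18 as an offline feature
# extractor, with cosine-similarity ranking for retrieval. Pooling choice,
# similarity measure, and file paths are assumptions, not the authors' exact pipeline.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Load ResNet18 pre-trained on ImageNet and drop the classification head,
# keeping the globally average-pooled 512-D feature vector.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_feature(image_path: str) -> torch.Tensor:
    """Return an L2-normalized 512-D feature vector for one image."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)            # shape: (1, 3, 224, 224)
    feat = extractor(x).flatten(1)              # shape: (1, 512)
    return torch.nn.functional.normalize(feat, dim=1).squeeze(0)

def retrieve(query_path: str, gallery: dict, top_k: int = 5):
    """Rank indexed images by cosine similarity to the query image."""
    q = extract_feature(query_path)
    scores = {name: float(q @ feat) for name, feat in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Offline stage: index the repository once (paths are hypothetical).
# gallery = {p: extract_feature(p) for p in ["kvasir/img001.jpg", "kvasir/img002.jpg"]}
# Online stage: print(retrieve("kvasir/query.jpg", gallery))
```

Swapping the backbone for SqueezeNet would follow the same pattern, with the feature dimensionality changing accordingly.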
Publisher
International Journal of Advanced and Applied Sciences
Cited by: 4 articles.