Affiliation:
1. School of Communications and Information Engineering, Xi'an University of Posts and Telecommunications, Shaanxi, China
2. China Pat Intellectual Property Office, Shaanxi, China
3. School of Informatics, Xiamen University, Fujian, China
Abstract
In the field of single image super-resolution, prevalent convolutional neural network (CNN) approaches typically assume a simplistic bicubic downsampling model for image degradation. This assumption misaligns with the complex degradation processes encountered in medical imaging, producing a performance gap when such algorithms are applied to real medical scenarios. To address this discrepancy, our study introduces a degradation contrastive learning framework designed for the nuanced degradation characteristics of medical images within the Internet of Medical Things (IoMT). Unlike traditional CNN-based super-resolution approaches that process all image channels homogeneously, our method acknowledges and exploits the disparity in informational content across channels. We present a blind image super-resolution technique built on edge reconstruction and an image feature supplement module. This approach not only preserves but enriches texture details, which are crucial for the accurate analysis of medical images in the IoMT. Comparative evaluations against existing blind super-resolution methods, on both natural-image test datasets and medical images, demonstrate superior performance. Notably, our approach stably restores images under a variety of degradations, a critical requirement in the IoMT context. Experimental results show that our method outperforms current state-of-the-art methods, marking a significant advance in medical image super-resolution.
Funder
National Natural Science Foundation of China
Natural Science Foundation of Shaanxi Province