Abstract
With the continuous advancement of medical imaging technology, vast amounts of multi-modal medical image data are now widely used for disease diagnosis, treatment, and research. Effectively managing and utilizing these data has become a pivotal challenge, particularly for image matching and retrieval. Although numerous methods for medical image matching and retrieval exist, they rely primarily on traditional image processing techniques and are often limited to manual feature extraction and single-modality processing. To address these limitations, this study introduces a medical image matching algorithm grounded in multi-task learning and further investigates a semantics-enhanced technique for cross-modal medical image retrieval. By deeply mining the complementary semantic information across medical images of different modalities, these methods offer novel perspectives and tools for the field of medical image matching and retrieval.
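The abstract does not specify the network architecture or loss design, but the multi-task idea it describes can be illustrated with a minimal PyTorch sketch: a shared encoder feeding two heads, one producing embeddings for pairwise image matching and one performing an auxiliary modality-classification task. All layer sizes, the choice of auxiliary task, and the loss weighting below are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskMatcher(nn.Module):
    """Shared encoder with two task heads: an embedding head for image
    matching and a hypothetical auxiliary modality-classification head."""

    def __init__(self, embed_dim: int = 128, num_modalities: int = 3):
        super().__init__()
        # Shared convolutional encoder (backbone is an assumption;
        # the paper does not specify one).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.embed_head = nn.Linear(64, embed_dim)          # matching task
        self.modality_head = nn.Linear(64, num_modalities)  # auxiliary task

    def forward(self, x):
        feats = self.encoder(x)
        return self.embed_head(feats), self.modality_head(feats)


def multi_task_loss(emb_a, emb_b, match_target, logits, modality_target,
                    aux_weight: float = 0.3):
    """Joint objective: a contrastive matching loss plus a weighted
    auxiliary classification loss (the 0.3 weight is illustrative)."""
    # Cosine-embedding loss pulls matched pairs together (+1 targets)
    # and pushes mismatched pairs apart (-1 targets).
    match_loss = F.cosine_embedding_loss(emb_a, emb_b, match_target)
    aux_loss = F.cross_entropy(logits, modality_target)
    return match_loss + aux_weight * aux_loss


if __name__ == "__main__":
    model = MultiTaskMatcher()
    a, b = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
    emb_a, logits_a = model(a)
    emb_b, _ = model(b)
    match = torch.tensor([1, -1, 1, -1])   # +1 = matched pair, -1 = mismatch
    modality = torch.randint(0, 3, (4,))   # e.g., CT / MRI / PET labels
    loss = multi_task_loss(emb_a, emb_b, match, logits_a, modality)
    print(loss.item())
```

In a setup like this, the auxiliary task acts as a regularizer that encourages the shared encoder to retain modality-aware semantics, which is one common way a multi-task objective can aid cross-modal matching.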
Publisher
International Information and Engineering Technology Association
Subject
Electrical and Electronic Engineering