Abstract
Transfer learning (TL) is an alternative to training deep learning (DL) models from scratch and transfers knowledge gained from large-scale data to different problems. ImageNet, a publicly available large-scale dataset, is commonly used for TL-based image analysis; many studies have applied models pre-trained on ImageNet to clinical prediction tasks and reported promising results. However, some have questioned the effectiveness of ImageNet, which consists solely of natural images, for medical image analysis. The aim of this study was to evaluate whether models pre-trained on RadImageNet, a large-scale medical image dataset, achieve superior performance on classification tasks in dental imaging modalities compared with ImageNet pre-trained models. Two dental imaging datasets were used to compare the classification performance of RadImageNet and ImageNet pre-trained models for TL. The tasks were (1) classifying the presence or absence of supernumerary teeth on panoramic radiographs and (2) classifying sex on lateral cephalometric radiographs. Performance was evaluated by comparing the area under the curve (AUC). On the panoramic radiograph dataset, the RadImageNet models gave average AUCs of 0.68 ± 0.15, whereas the ImageNet models reached 0.74 ± 0.19 (p < 0.01). In contrast, on the lateral cephalometric dataset, the RadImageNet models demonstrated average AUCs of 0.76 ± 0.09 and the ImageNet models 0.75 ± 0.17. Thus, the relative performance of RadImageNet and ImageNet pre-trained models in TL depends on the dental image dataset used.
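The AUC used as the evaluation metric here can be illustrated with a minimal, self-contained sketch (not the authors' pipeline): it is equivalent to the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one (ties counted half), as in this hypothetical helper.

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparison (Mann-Whitney).

    Over all positive/negative pairs, counts how often the positive
    example receives the higher score; ties contribute 0.5.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Toy usage: two positives, two negatives; one pair is misordered,
# so 3 of 4 pairs are correct and the AUC is 0.75.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

In practice the study's per-model AUCs (e.g. 0.68 vs 0.74 on panoramic radiographs) would be computed this way from each model's predicted scores on the test images; a value of 0.5 corresponds to chance-level ranking.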
Publisher
Springer Science and Business Media LLC