RCKD: Response-Based Cross-Task Knowledge Distillation for Pathological Image Analysis
Published: 2023-11-02
Container-title: Bioengineering
Volume: 10
Issue: 11
Page: 1279
ISSN: 2306-5354
Language: en
Authors:
Kim Hyunil (1), Kwak Tae-Yeong (1), Chang Hyeyoon (1), Kim Sun Woo (1), Kim Injung (2)
Affiliations:
1. Deep Bio Inc., Seoul 08380, Republic of Korea
2. School of Computer Science and Electrical Engineering, Handong Global University, Pohang 37554, Republic of Korea
Abstract
We propose a novel transfer learning framework for pathological image analysis, Response-based Cross-task Knowledge Distillation (RCKD), which improves model performance by pretraining on a large unlabeled dataset under the guidance of a high-performance teacher model. RCKD first pretrains a student model to predict the nuclei segmentation results of the teacher model on unlabeled pathological images, and then fine-tunes the pretrained model for downstream tasks, such as organ cancer sub-type classification and cancer region segmentation, using relatively small target datasets. Unlike conventional knowledge distillation, RCKD does not require the teacher and student models to share the same target task. Moreover, unlike conventional transfer learning, RCKD can transfer knowledge between models with different architectures. In addition, we propose a lightweight architecture, the Convolutional neural network with Spatial Attention by Transformers (CSAT), for processing high-resolution pathological images with limited memory and computation. CSAT achieved a top-1 accuracy of 78.6% on ImageNet with only 3M parameters and 1.08 G multiply-accumulate (MAC) operations. When pretrained by RCKD, CSAT achieved average classification and segmentation accuracies of 94.2% and 0.673 mIoU on six pathological image datasets, 4% and 0.043 mIoU higher than EfficientNet-B0, and 7.4% and 0.006 mIoU higher than ConvNextV2-Atto pretrained on ImageNet, respectively.
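The pretraining stage described in the abstract, where the student learns to reproduce the teacher's nuclei-segmentation response on unlabeled images, can be sketched as a single training step. This is a minimal illustration in PyTorch, not the authors' implementation: the function name `rckd_pretrain_step`, the choice of soft cross-entropy against the teacher's per-pixel probabilities, and the model interfaces are all assumptions for the sketch.

```python
# Hypothetical sketch of one response-based cross-task distillation step:
# the student is trained to match the teacher's nuclei-segmentation output
# on unlabeled pathology images (no ground-truth labels involved).
import torch
import torch.nn as nn
import torch.nn.functional as F

def rckd_pretrain_step(student, teacher, images, optimizer):
    """Run one pretraining step; returns the scalar distillation loss."""
    teacher.eval()
    with torch.no_grad():
        # The teacher's per-pixel class probabilities act as soft targets.
        soft_targets = F.softmax(teacher(images), dim=1)
    logits = student(images)
    # Pixel-wise soft cross-entropy against the teacher's response.
    loss = -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After this cross-task pretraining, the student would be fine-tuned on the small labeled target dataset (e.g. cancer sub-type classification) with an ordinary supervised loss; because only the teacher's response is transferred, the student's architecture is free to differ from the teacher's.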
Funder: Korea Health Technology R&D Project through the Korea Health Industry Development Institute, Ministry of Health & Welfare, Republic of Korea