A Novel Multi-Dimensional Joint Search Method for the Compression of Medical Image Segmentation Models
Published: 2024-08-23
Journal of Imaging (J. Imaging), Volume 10, Issue 9, Page 206
ISSN: 2313-433X
Language: en
Authors:
Zheng Yunhui (1), Wu Zhiyong (1), Ji Fengna (1), Du Lei (1), Yang Zhenyu (1)
Affiliation:
1. School of Computer Science and Technology, Shandong University of Technology, Zibo 255049, China
Abstract
Due to the excellent results achieved by transformers in computer vision, more and more scholars have introduced transformers into the field of medical image segmentation. However, transformers greatly increase a model's parameter count, occupying a large amount of computing resources and making training very time-consuming. To alleviate this disadvantage, this paper explores a flexible and efficient search strategy that finds the best subnet within a continuous transformer network. The method is based on a learnable and uniform L1 sparsity constraint containing factors that reflect the global importance of the continuous search space across different dimensions, while the search process itself is simple and efficient, requiring only a single round of training. To compensate for the accuracy lost during the search, a pixel classification module is introduced into the model. Our experiments show that the model reduces parameters and FLOPs by 30% while slightly increasing segmentation accuracy on the Automatic Cardiac Diagnosis Challenge (ACDC) dataset.
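The core idea described in the abstract can be sketched as follows. This is a minimal, illustrative sketch only, not the paper's implementation: it assumes each prunable dimension (e.g. an attention head or MLP channel) carries a learnable importance factor, an L1 penalty on those factors is added to the task loss during the single training round, and dimensions with small factors are then dropped to form the subnet. All function names and parameters here are hypothetical.

```python
def l1_penalty(importance, weight=1e-4):
    """L1 sparsity term added to the task loss.

    `importance` is a list of learnable factors, one per prunable
    dimension; `weight` balances sparsity against task accuracy
    (both names are illustrative assumptions, not from the paper).
    """
    return weight * sum(abs(a) for a in importance)

def select_subnet(importance, keep_ratio=0.7):
    """Keep the top `keep_ratio` fraction of dimensions by |importance|.

    Returns the sorted indices of the retained dimensions; the rest
    are pruned away to form the searched subnet.
    """
    ranked = sorted(range(len(importance)), key=lambda i: -abs(importance[i]))
    kept = ranked[: max(1, int(round(keep_ratio * len(importance))))]
    return sorted(kept)

# Example: four dimensions, two with near-zero learned importance.
factors = [0.9, 0.01, 0.5, 0.02]
print(select_subnet(factors, keep_ratio=0.7))  # -> [0, 2, 3]
```

A keep ratio of roughly 0.7 mirrors the paper's reported 30% reduction in parameters and FLOPs; in practice the factors would be trained jointly with the network weights rather than fixed as above.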