L1RR: Model Pruning Using Dynamic and Self-Adaptive Sparsity for Remote-Sensing Target Detection to Prevent Target Feature Loss
Published: 2024-06-05
Volume: 16, Issue: 11, Page: 2026
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Authors:
Ran Qiong 1, Li Mengwei 1, Zhao Boya 2, He Zhipeng 2,3, Wu Yuanfeng 2
Affiliations:
1. College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
2. Key Laboratory of Computational Optical Imaging Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
3. School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
Abstract
The limited resources of edge computing platforms in airborne and spaceborne imaging payloads preclude the deployment of complex image-processing models. Model pruning can eliminate redundant parameters and reduce the computational load, improving processing efficiency on such platforms. Current challenges in model pruning for remote-sensing object detection include the risk of losing target features, particularly during sparse training and pruning, and the difficulty of maintaining channel correspondence across residual structures, which often leads to the retention of redundant features and compromises the balance between model size and accuracy. To address these challenges, we propose the L1 reweighted regularization (L1RR) pruning method. Leveraging dynamic and self-adaptive sparse modules, we optimize L1 sparsity regularization, preserving the model's target feature information via a feature attention loss mechanism that determines appropriate pruning ratios. Additionally, we propose a residual reconstruction procedure that removes redundant feature channels from residual structures while maintaining the residual inference structure through output and input channel recombination, achieving a balance between model size and accuracy. Validation on two remote-sensing datasets demonstrates reductions in parameters and floating-point operations (FLOPs) of 77.54% and 65%, respectively, and a 48.5% increase in inference speed on the Jetson TX2 platform. Compared with other methods, this framework better preserves target features and more effectively distinguishes feature-channel importance, significantly enhancing feature-channel robustness for difficult targets and extending the applicability of pruning to less difficult targets.
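The abstract describes sparse training with L1 reweighted regularization on channel importance factors, followed by pruning the channels that the regularizer has driven toward zero. A minimal sketch of the general reweighted-L1 idea and a magnitude-based channel mask is given below; this is not the authors' exact L1RR formulation (which the abstract does not specify), and the function names, the `eps` stabilizer, and the fixed pruning ratio are assumptions for illustration:

```python
import numpy as np

def reweighted_l1_penalty(gammas, eps=1e-3):
    """Reweighted L1 penalty on per-channel scaling factors.

    Each factor gamma_i is weighted by 1 / (|gamma_i| + eps), so factors
    that are already small are penalized more strongly and pushed further
    toward zero, while large (important) factors are penalized less.
    """
    weights = 1.0 / (np.abs(gammas) + eps)
    return float(np.sum(weights * np.abs(gammas)))

def prune_mask(gammas, ratio):
    """Boolean keep-mask that drops the `ratio` fraction of channels
    with the smallest |gamma| (i.e., the least important channels)."""
    n_prune = int(len(gammas) * ratio)
    order = np.argsort(np.abs(gammas))      # ascending by magnitude
    mask = np.ones(len(gammas), dtype=bool)
    mask[order[:n_prune]] = False           # mark smallest channels
    return mask

# Illustrative use: four channels, prune the weakest half.
gammas = np.array([0.01, 0.5, 0.002, 0.8])
keep = prune_mask(gammas, ratio=0.5)        # keeps channels 1 and 3
```

For factors well above `eps`, each penalty term is close to 1, so the reweighted penalty approximates the L0 count of nonzero channels rather than their summed magnitude; this is the usual motivation for reweighting over plain L1.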
Funder
National Key R&D Program of China