Affiliation:
1. University of Connecticut
Abstract
Network pruning is a widely used technique for reducing the computation cost and model size of deep neural networks. However, the typical three-stage pipeline of training, pruning, and retraining (fine-tuning) significantly increases the overall training time. In this paper, we develop a systematic weight-pruning optimization approach based on Surrogate Lagrangian Relaxation (SLR), which is tailored to overcome the difficulties caused by the discrete nature of the weight-pruning problem while ensuring fast convergence.
We further accelerate the convergence of SLR by using quadratic penalties. Model parameters obtained by SLR during the training phase are much closer to their optimal values than those obtained by other state-of-the-art methods. We evaluate the proposed method on image classification tasks using CIFAR-10 and ImageNet, object detection using COCO 2014, and Ultra-Fast-Lane-Detection on the TuSimple lane detection dataset. Experimental results demonstrate that our SLR-based weight-pruning optimization approach achieves a higher compression rate than state-of-the-art methods under the same accuracy requirement. It also maintains high model accuracy at the hard-pruning stage without retraining, reducing the traditional three-stage pruning pipeline to two stages. Given a limited budget of retraining epochs, our approach quickly recovers the model accuracy.
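To make the setting concrete, the sketch below illustrates the general shape of penalty-based weight pruning on a toy quadratic objective: a dense variable is coupled to a hard-sparsified copy through Lagrange multipliers and a quadratic penalty term, and the multipliers are updated iteratively. This is an illustrative ADMM-style analogue under assumed settings (function names, step sizes, and the toy loss are all hypothetical); the paper's actual SLR multiplier-update and step-size rules are not reproduced here.

```python
import numpy as np

def project_topk(w, k):
    """Hard-prune: keep the k largest-magnitude entries, zero the rest."""
    z = np.zeros_like(w)
    idx = np.argsort(np.abs(w).ravel())[-k:]
    z.ravel()[idx] = w.ravel()[idx]
    return z

def prune_quadratic(w_target, k, rho=1.0, lr=0.1, iters=200):
    """Toy pruning loop: minimize ||w - w_target||^2 s.t. ||w||_0 <= k,
    using multipliers lam and a quadratic penalty (rho/2)||w - z||^2."""
    w = w_target.copy()
    z = project_topk(w, k)
    lam = np.zeros_like(w)  # Lagrange multipliers for the constraint w = z
    for _ in range(iters):
        # Gradient step on the penalized objective w.r.t. the dense weights:
        # grad = 2(w - w_target) + lam + rho*(w - z)
        grad = 2.0 * (w - w_target) + lam + rho * (w - z)
        w -= lr * grad
        # Sparse copy: projection of the shifted weights onto the top-k set.
        z = project_topk(w + lam / rho, k)
        # Multiplier update driven by the constraint violation w - z.
        lam += rho * (w - z)
    # Final hard-pruning step: enforce exact sparsity on the result.
    return project_topk(w, k)

rng = np.random.default_rng(0)
w0 = rng.normal(size=(8, 8))
w_pruned = prune_quadratic(w0, k=16)
print(np.count_nonzero(w_pruned))  # 16 nonzeros remain
```

In this toy form, the quadratic penalty pulls the dense weights toward their sparse projection between multiplier updates, which is the mechanism the abstract credits for faster convergence.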
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
6 articles.
1. Sparsifying Graph Neural Networks with Compressive Sensing;Proceedings of the Great Lakes Symposium on VLSI 2024;2024-06-12
2. PruneGNN: Algorithm-Architecture Pruning Framework for Graph Neural Network Acceleration;2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA);2024-03-02
3. Physics-aware Roughness Optimization for Diffractive Optical Neural Networks;2023 60th ACM/IEEE Design Automation Conference (DAC);2023-07-09
4. Acceleration-aware, Retraining-free Evolutionary Pruning for Automated Fitment of Deep Learning Models on Edge Devices;Proceedings of the Second International Conference on AI-ML Systems;2022-10-12
5. Towards Sparsification of Graph Neural Networks;2022 IEEE 40th International Conference on Computer Design (ICCD);2022-10