Affiliation:
1. Institute of Computing Technology, Chinese Academy of Sciences, China and University of Chinese Academy of Sciences, Beijing, China
2. Zhejiang Lab, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
3. Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Abstract
The efficiency of deep neural network (DNN) solutions on real hardware devices is mainly determined by the DNN architecture and the compiler-level scheduling strategy on the hardware. When trying to fully exploit the underlying hardware and obtain the optimal tradeoff between DNN accuracy and runtime performance, we discovered that these two optimization goals, DNN architecture and scheduling policy, are intimately related to each other. However, current hardware-aware Neural Architecture Search (NAS) methods primarily focus on the DNN architecture search process, ignoring the effects of various compiler-level scheduling strategies (e.g., graph-level optimization, loop transformations, parallelization, etc.) on the network candidates being evaluated during the search. As a result, they may overlook the truly optimal DNN implementations on hardware, which can only be discovered by trying out different combinations of scheduling strategies and DNN architectures. This work proposes a NAS framework (CHaNAS) that searches not only for the network architecture but also for the dedicated compiler-level scheduling policy, yielding the optimal co-design solution on the target hardware. We propose a block-based pre-scheduling methodology to reduce the co-design search space and enable the automatic generation of the optimal co-design, including the network architecture and the tensor programs that implement the scheduling policy. Further, we introduce a new search objective function based on the generalization gap to prevent the selection of architectures that are prone to overfitting. We evaluate CHaNAS on ImageNet on different hardware back-ends against the state-of-the-art hardware-aware search method based on the MobileNet-v3 search space.
Experimental results show that the co-design solutions obtained by CHaNAS deliver up to 1.6×, 1.9×, and 1.7× performance boosts on an NVIDIA P100 GPU, an Intel Xeon 8163 CPU, and a Samsung Note 10 mobile device, respectively, over baselines of the same accuracy level.
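The abstract describes ranking (architecture, schedule) pairs jointly, using an objective that penalizes the generalization gap alongside hardware latency. The paper's exact formula is not given on this page, so the sketch below is purely illustrative: the function name, the alpha/beta knobs, the MnasNet-style soft latency penalty, and every measurement number are assumptions, not CHaNAS's actual objective.

```python
def codesign_score(val_acc, train_acc, latency_ms, target_ms,
                   alpha=0.5, beta=0.07):
    """Score one (architecture, schedule) co-design candidate.

    Illustrative only: penalizes the generalization gap
    (train_acc - val_acc) to discourage overfit-prone architectures,
    and softly penalizes latency beyond the target. alpha and beta
    are made-up knobs, not values from the paper.
    """
    gap = max(train_acc - val_acc, 0.0)
    quality = val_acc - alpha * gap
    # No penalty when the candidate meets its latency target.
    if latency_ms <= target_ms:
        latency_factor = 1.0
    else:
        latency_factor = (target_ms / latency_ms) ** beta
    return quality * latency_factor

# Toy joint search: every architecture is paired with every scheduling
# policy, since the best schedule differs per architecture and per
# back-end. All numbers below are invented for demonstration.
measurements = {
    ("net_a", "sched_unrolled"):   dict(train_acc=0.82, val_acc=0.76, latency_ms=12.0),
    ("net_a", "sched_vectorized"): dict(train_acc=0.82, val_acc=0.76, latency_ms=22.0),
    ("net_b", "sched_unrolled"):   dict(train_acc=0.90, val_acc=0.74, latency_ms=11.0),
    ("net_b", "sched_vectorized"): dict(train_acc=0.90, val_acc=0.74, latency_ms=9.0),
}
best = max(measurements,
           key=lambda k: codesign_score(target_ms=15.0, **measurements[k]))
```

In this toy setup, `net_b` has higher training accuracy but a larger generalization gap, so the objective prefers the `net_a` pair that also meets the latency target, which is the kind of tradeoff the abstract's objective is designed to capture.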
Funder
National Natural Science Foundation of China
Strategic Priority Research Program of Chinese Academy of Science
2025 Key Technology Innovation Program of Ningbo City
Publisher
Association for Computing Machinery (ACM)
Subject
Hardware and Architecture, Software
Cited by
4 articles.
1. CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators;Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2;2024-04-27
2. Vibration-based SHM of Dębica railway steel bridge with optimized ANN and ANFIS;Journal of Constructional Steel Research;2024-04
3. APPEND: Rethinking ASIP Synthesis in the Era of AI;2023 60th ACM/IEEE Design Automation Conference (DAC);2023-07-09
4. Compiler Technologies in Deep Learning Co-Design: A Survey;Intelligent Computing;2023-01