Affiliation:
1. Imperial College London, United Kingdom
2. Google, United States
3. iSize, United Kingdom
4. Advantest, United States
5. United States
Abstract
The ever-growing computational demands of increasingly complex machine learning models frequently necessitate the use of powerful cloud-based infrastructure for their training. Binary neural networks are known to be promising candidates for on-device inference due to their extreme compute and memory savings over higher-precision alternatives. However, their existing training methods require the concurrent storage of high-precision activations for all layers, generally making learning on memory-constrained devices infeasible. In this article, we demonstrate that the backward propagation operations needed for binary neural network training are strongly robust to quantization, thereby making on-the-edge learning with modern models a practical proposition. We introduce a low-cost binary neural network training strategy exhibiting sizable memory footprint reductions while inducing little to no accuracy loss versus Courbariaux & Bengio's standard approach. These decreases are primarily enabled through the retention of activations exclusively in binary format. Against the latter algorithm, our drop-in replacement sees memory requirement reductions of 3–5×, while reaching similar test accuracy (± 2 pp) in comparable time, across a range of small-scale models trained to classify popular datasets. We also demonstrate from-scratch ImageNet training of binarized ResNet-18, achieving a 3.78× memory reduction. Our work is open source and includes the Raspberry Pi-targeted prototype we used to verify our modeled memory decreases and capture the associated energy drops. Such savings will allow unnecessary cloud offloading to be avoided, reducing latency, increasing energy efficiency, and safeguarding end-user privacy.
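To make the memory argument concrete, the sketch below shows a standard PyTorch-style straight-through estimator for the sign activation, the mechanism underlying the Courbariaux & Bengio-style training referenced above. It illustrates why conventional binary neural network training retains one high-precision activation per binarized layer: the full-precision input must be saved to evaluate the backward clipping mask. The class name SignSTE and the usage line are our own hypothetical illustration, not code from the article, whose contribution is precisely that activations can instead be kept only in binary form during the backward pass.

import torch

class SignSTE(torch.autograd.Function):
    # Binarise an activation tensor with sign(), using the straight-through
    # estimator (STE) for its gradient. NOTE: this mirrors the standard
    # approach; the full-precision input is saved for backward, which is the
    # per-layer memory cost the article's binary-only scheme avoids.

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)   # high-precision activation retained per layer
        return torch.sign(x)       # binary output in {-1, 0, +1}

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # STE: treat sign() as identity inside |x| <= 1, zero gradient outside.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

# Hypothetical usage in a binarised layer's forward pass:
# y = SignSTE.apply(batch_norm(conv(x)))

Every such call holds one full-precision tensor until the backward pass completes; summed over all layers, this is the activation storage that the reported 3–5× memory reductions target.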
Publisher
Association for Computing Machinery (ACM)
Subject
Hardware and Architecture, Software
References (51 articles)
1. Naman Agarwal, Ananda Theertha Suresh, Felix Yu, Sanjiv Kumar, and H. Brendan McMahan. 2018. CpSGD: Communication-efficient and differentially-private distributed SGD. In International Conference on Neural Information Processing Systems.
2. Milad Alizadeh, Javier Fernández-Marqués, Nicholas D. Lane, and Yarin Gal. 2018. An empirical study of binary neural networks’ optimisation. In International Conference on Learning Representations.
3. Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Animashree Anandkumar. 2018. SignSGD: Compressed optimisation for non-convex problems. In International Conference on Machine Learning.
4. Joseph Bethge. 2019. Back to simplicity: How to train accurate BNNs from scratch? arXiv preprint arXiv:1906.08637.
5. L. Susan Blackford, Antoine Petitet, Roldan Pozo, Karin Remington, R. Clint Whaley, James Demmel, Jack Dongarra, Iain Duff, Sven Hammarling, Greg Henry, Michael Heroux, Linda Kaufman, and Andrew Lumsdaine. 2002. An updated set of basic linear algebra subprograms (BLAS). ACM Trans. Math. Software 28, 2 (2002), 135–151.
Cited by
2 articles.