Affiliation:
1. Brown University
2. University of Electronic Science and Technology of China
3. Los Alamos National Laboratory
4. Pacific Northwest National Laboratory
5. Massachusetts Institute of Technology and Brown University
Abstract
Going deeper and wider in neural architectures improves accuracy, but the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either have to switch to less desirable network architectures or nontrivially dissect a network across multiple GPUs. Both distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime that enables network training far beyond the GPU DRAM capacity. SuperNeurons features three memory optimizations, Liveness Analysis, Unified Tensor Pool, and Cost-Aware Recomputation; together they effectively reduce the network-wide peak memory usage down to the maximal memory usage among layers. We also address the performance issues in these memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for training, but also dynamically allocates memory for convolution workspaces to achieve high performance. Evaluations against Caffe, Torch, MXNet and TensorFlow demonstrate that SuperNeurons trains networks at least 3.2432× deeper than current frameworks while delivering leading performance. In particular, SuperNeurons can train ResNet2500, which has 10^4 basic network layers, on a 12GB K40c.
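The abstract only names the memory optimizations; the sketch below is a rough, hypothetical illustration (not SuperNeurons' actual implementation or API) of two of the ideas: liveness analysis, which treats a tensor as dead once its last consumer has executed so its memory can be reused, and cost-aware recomputation, which discards activations that are cheap to recompute and restores them during the backward pass. All names (`Layer`, `forward_cost`, the budget values) are illustrative assumptions.

```python
# Minimal sketch of liveness analysis and cost-aware recomputation.
# Hypothetical names and numbers; not the SuperNeurons codebase.
from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    mem_bytes: int        # size of this layer's output activation
    forward_cost: float   # relative cost of recomputing its forward pass
    keep: bool = True     # False => drop the output after forward, recompute in backward


def liveness_free_points(deps: dict[str, list[str]]) -> dict[str, int]:
    """For each tensor, return the index of the last layer that reads it;
    after that index the tensor is dead and its memory can be recycled."""
    last_use: dict[str, int] = {}
    for i, layer in enumerate(deps):
        for src in deps[layer]:
            last_use[src] = i
    return last_use


def cost_aware_recomputation(layers: list[Layer], budget_bytes: int) -> None:
    """Greedily mark the cheapest-to-recompute layers for recomputation
    until the retained activations fit within the memory budget."""
    resident = sum(l.mem_bytes for l in layers)
    for layer in sorted(layers, key=lambda l: l.forward_cost):
        if resident <= budget_bytes:
            break
        layer.keep = False
        resident -= layer.mem_bytes


if __name__ == "__main__":
    # Toy four-layer network: liveness tells us when each output dies,
    # recomputation decides which outputs not to keep at all.
    deps = {"conv1": [], "relu1": ["conv1"], "conv2": ["relu1"], "pool1": ["conv2"]}
    print(liveness_free_points(deps))

    net = [Layer("conv1", 512 << 20, 5.0), Layer("relu1", 512 << 20, 0.1),
           Layer("conv2", 256 << 20, 4.0), Layer("pool1", 64 << 20, 0.2)]
    cost_aware_recomputation(net, budget_bytes=800 << 20)
    print([(l.name, l.keep) for l in net])
```

Dropping cheap layers (activations, pooling) first is what makes the recomputation "cost-aware": expensive convolutions stay resident, so the memory saved costs little extra compute.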
Funder
VMware
Mellanox
Oracle
Natural Science Foundation of China
Google
DARPA
Central Universities of China
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design, Software
Cited by
138 articles.