Affiliation
1. Technical University of Munich, Boltzmannstrasse, Garching, Germany
Abstract
Deep Learning (DL) has achieved immense success in recent years, leading to state-of-the-art results in various domains such as image recognition and natural language processing. One reason for this success is the increasing size of DL models and the availability of vast amounts of training data. To continue improving the performance of DL, increasing the scalability of DL systems is necessary. In this survey, we perform a broad and thorough investigation of challenges, techniques, and tools for scalable DL on distributed infrastructures. This covers infrastructures for DL, methods for parallel DL training, multi-tenant resource scheduling, and the management of training and model data. Further, we analyze and compare 11 current open-source DL frameworks and tools and investigate which of these techniques are commonly implemented in practice. Finally, we highlight future trends in DL systems that deserve further investigation.
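To make the "parallel DL training" topic of the abstract concrete, below is a minimal sketch of synchronous data-parallel training, one of the techniques the survey covers. It uses PyTorch's DistributedDataParallel with the NCCL communication backend; the linear model, random data, launcher command, and hyperparameters are illustrative assumptions, not artifacts of the survey itself.

```python
# Sketch of synchronous data-parallel training (single node assumed,
# one process per GPU, launched e.g. via `torchrun --nproc_per_node=N train.py`).
# Model, data, and hyperparameters are placeholders.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")   # NCCL performs the gradient all-reduce
    rank = dist.get_rank()                    # equals the local GPU index on one node
    device = torch.device(f"cuda:{rank}")
    torch.cuda.set_device(device)

    model = DDP(torch.nn.Linear(128, 10).to(device), device_ids=[rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(100):                      # each replica trains on its own data shard
        x = torch.randn(32, 128, device=device)
        y = torch.randint(0, 10, (32,), device=device)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()       # backward() triggers the all-reduce
        optimizer.step()                      # replicas apply identical averaged gradients

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Because gradients are averaged across replicas before each optimizer step, all model copies stay identical, which is what distinguishes this synchronous scheme from the asynchronous parameter-server approaches also discussed in such surveys.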
Publisher
Association for Computing Machinery (ACM)
Subject
General Computer Science, Theoretical Computer Science
References
211 articles.
Cited by
117 articles.