1. Aaron Harlap, Deepak Narayanan, Amar Phanishayee, et al. 2018. PipeDream: Pipeline Parallelism for DNN Training. In Proceedings of SysML'18.
2. Alex Krizhevsky. 2014. One weird trick for parallelizing convolutional neural networks. arXiv:1404.5997 (2014).
3. Chi-Chung Chen, Chia-Lin Yang, and Hsiang-Yun Cheng. 2018. Efficient and Robust Parallel DNN Training through Model Parallelism on Multi-GPU Platform. arXiv:1809.02839 (2018).
4. Henggang Cui, James Cipar, et al. 2014. Exploiting Bounded Staleness to Speed Up Big Data Analytics. In Proceedings of ATC'14.
5. Wei Dai, Abhimanu Kumar, Jinliang Wei, et al. 2015. High-performance Distributed ML at Scale Through Parameter Server Consistency Models. In Proceedings of AAAI'15.