SplitRPC: A {Control + Data} Path Splitting RPC Stack for ML Inference Serving

Authors:

Adithya Kumar, Anand Sivasubramaniam, Timothy Zhu

Affiliation:

The Pennsylvania State University, University Park, PA, USA

Abstract

The growing adoption of hardware accelerators, driven by their intelligent compiler and runtime system counterparts, has democratized ML services and precipitously reduced their execution times. This motivates us to shift our attention to characterizing the overheads imposed by the RPC mechanism (the "RPC tax") when serving these models on accelerators. Conventional RPC implementations implicitly assume that the host CPU services the requests, and we focus on extending such work to accelerator-based services. While SmartNIC-based solutions work well for simple applications, serving complex ML models requires a more nuanced view to optimize both the data path and the control/orchestration of these accelerators. We program commodity network interface cards (NICs) to split the control and data paths, enabling effective transfer of control while efficiently delivering the payload to the accelerator. As opposed to unified approaches that bundle these paths together and thereby limit the flexibility of each, we design and implement SplitRPC, a {control + data} path optimizing RPC mechanism for ML inference serving. SplitRPC optimizes the data path to the accelerator while allowing the CPU to retain full orchestration capabilities. We implement SplitRPC on both commodity NICs and SmartNICs and demonstrate that it is effective in minimizing the RPC tax while providing significant gains in throughput and latency.
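To make the split concrete, the following is a minimal, illustrative sketch of the idea rather than the paper's implementation. It assumes the request payload has already been written directly into a pre-registered GPU buffer by the NIC (for example, via a GPUDirect-RDMA-style transfer), so the host CPU only receives a small control header and orchestrates the kernel launch without touching the payload bytes. All names here (ControlHeader, dummy_infer) are hypothetical.

// Minimal CUDA C++ sketch of a split control/data path (illustrative only).
// The payload is assumed to already reside in GPU memory, as it would after a
// direct NIC-to-GPU transfer; we simulate that placement with one host copy.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdint>
#include <cstddef>

// Control-path message delivered to the host CPU: metadata only, no payload.
struct ControlHeader {
    uint64_t request_id;    // RPC request identifier
    uint32_t model_id;      // which model/graph to invoke
    uint32_t payload_len;   // number of floats already resident in GPU memory
    size_t   payload_off;   // element offset into the pre-registered GPU buffer
};

// Stand-in for an inference kernel operating on the device-resident payload.
__global__ void dummy_infer(const float* in, float* out, uint32_t n) {
    uint32_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;  // placeholder compute
}

int main() {
    const uint32_t n = 1024;

    // Pre-registered GPU buffers. In the real system the NIC would steer the
    // request payload into d_in directly; here one host copy simulates that.
    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));

    float h_payload[n];
    for (uint32_t i = 0; i < n; ++i) h_payload[i] = 1.0f;
    cudaMemcpy(d_in, h_payload, n * sizeof(float), cudaMemcpyHostToDevice);

    // Control path: only this small header reaches the CPU, which keeps full
    // orchestration (which kernel, which buffer) without touching the payload.
    ControlHeader hdr{/*request_id=*/1, /*model_id=*/0,
                      /*payload_len=*/n, /*payload_off=*/0};

    dummy_infer<<<(hdr.payload_len + 255) / 256, 256>>>(
        d_in + hdr.payload_off, d_out, hdr.payload_len);
    cudaDeviceSynchronize();

    // Response path: a real server would send the result back over the NIC;
    // here we just read back one value to confirm the request completed.
    float check = 0.0f;
    cudaMemcpy(&check, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("request %llu done, out[0] = %f\n",
           (unsigned long long)hdr.request_id, check);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

The point the sketch highlights is that the large tensor payload never crosses host memory on the critical path; only the small header does, which is what lets the data path and the control path be optimized independently.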

Funder

National Science Foundation

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Hardware and Architecture, Software

