Optimizing inference serving on serverless platforms

Author:

Ahsan Ali (1), Riccardo Pinciroli (2), Feng Yan (3), Evgenia Smirni (4)

Affiliation:

1. University of Nevada, Reno

2. Gran Sasso Science Institute, L'Aquila, Italy

3. University of Nevada, Reno

4. William and Mary

Abstract

Serverless computing is gaining popularity for machine learning (ML) serving workloads due to its autonomous resource scaling, ease of use, and pay-per-use cost model. Existing serverless platforms work well for image-based ML inference, where requests are homogeneous in service demands. However, recent advances in natural language processing cannot fully benefit from existing serverless platforms, as their requests are intrinsically heterogeneous. Batching requests for processing can significantly increase ML serving efficiency while reducing monetary cost, thanks to the pay-per-use pricing model adopted by serverless platforms. Yet batching heterogeneous ML requests leads to additional computation overhead, as small requests must be "padded" to the same size as large requests within the same batch. Reaching effective batching decisions (i.e., which requests should be batched together and why) is non-trivial: the padding overhead coupled with serverless auto-scaling forms a complex optimization problem. To address this, we develop Multi-Buffer Serving (MBS), a framework that optimizes the batching of heterogeneous ML inference serving requests to minimize their monetary cost while meeting their service level objectives (SLOs). The core of MBS is a performance and cost estimator driven by analytical models supercharged by a Bayesian optimizer. MBS is prototyped and evaluated on AWS using bursty workloads. Experimental results show that MBS preserves SLOs while outperforming the state-of-the-art by up to 8x in terms of cost savings, reducing padding overhead by up to 37x with 3x fewer serverless function invocations.
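The padding overhead described in the abstract can be made concrete with a short sketch: when heterogeneous requests are batched together, every request is padded to the longest sequence in its batch, so mixing short and long requests wastes computation. The request sizes and grouping below are hypothetical illustrations, not the authors' MBS implementation.

```python
def padding_overhead(batch):
    """Extra tokens processed because shorter requests are padded
    to the batch's maximum sequence length."""
    longest = max(batch)
    return sum(longest - length for length in batch)

requests = [8, 12, 128, 130]  # token lengths of four heterogeneous NLP requests

# One mixed batch: the two small requests are padded up to 130 tokens.
mixed = padding_overhead(requests)  # (130-8) + (130-12) + (130-128) + 0 = 242

# Two size-aware batches (the intuition behind multi-buffer batching):
grouped = padding_overhead([8, 12]) + padding_overhead([128, 130])  # 4 + 2 = 6

print(mixed, grouped)  # 242 vs. 6 padded tokens
```

Grouping similarly sized requests drastically cuts wasted tokens, but smaller batches can mean more function invocations and longer queueing, which is the cost/SLO trade-off MBS optimizes.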

Publisher

Association for Computing Machinery (ACM)



Cited by 26 articles.

1. Experimental evaluation of architectural software performance design patterns in microservices;Journal of Systems and Software;2024-12

2. Tangram: High-Resolution Video Analytics on Serverless Platform with SLO-Aware Batching;2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS);2024-07-23

3. Empirical architecture comparison of two-input machine learning systems for vision tasks;Formal Aspects of Computing;2024-06-27

4. FSD-Inference: Fully Serverless Distributed Inference with Scalable Cloud Communication;2024 IEEE 40th International Conference on Data Engineering (ICDE);2024-05-13

5. Online Container Caching with Late-Warm for IoT Data Processing;2024 IEEE 40th International Conference on Data Engineering (ICDE);2024-05-13
