Baymax

Authors:

Quan Chen¹, Hailong Yang², Jason Mars³, Lingjia Tang³

Affiliations:

1. Shanghai Jiao Tong University, Shanghai, China

2. Beihang University, Beijing, China

3. University of Michigan, Ann Arbor, USA

Abstract

Modern warehouse-scale computers (WSCs) are being outfitted with accelerators to provide the significant compute required by emerging intelligent personal assistant (IPA) workloads such as voice recognition, image classification, and natural language processing. It is well known that the diurnal user access pattern of user-facing services provides a strong incentive to co-locate applications for better accelerator utilization and efficiency, and prior work has focused on enabling co-location on multicore processors. However, interference when co-locating applications on non-preemptive accelerators is fundamentally different from contention on multicore CPUs and introduces a new set of challenges for reducing QoS violations. To address this open problem, we first identify the underlying causes of QoS violations in accelerator-outfitted servers. Our experiments show that queuing delay for the compute resources and PCI-e bandwidth contention for data transfer are the two main factors that contribute to the long tails of user-facing applications. We then present Baymax, a runtime system that orchestrates the execution of compute tasks from different applications and mitigates PCI-e bandwidth contention to deliver the required QoS for user-facing applications and increase accelerator utilization. Using DjiNN, a deep neural network service, Sirius, an end-to-end IPA workload, and traditional applications on an Nvidia K40 GPU, our evaluation shows that Baymax improves accelerator utilization by 91.3% while achieving the desired 99%-ile latency target for user-facing applications. In fact, Baymax reduces the 99%-ile latency of user-facing applications by up to 195x over default execution.
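To make the abstract's core idea concrete, the sketch below illustrates duration-prediction-based reordering on a non-preemptive accelerator: a batch kernel is launched ahead of a waiting user-facing kernel only if its predicted duration still leaves enough headroom for the user-facing kernel to meet its latency target. This is not Baymax's actual algorithm; it is a minimal, hypothetical example, and the names (`Task`, `admit_batch_tasks`, `predicted_ms`) are assumptions introduced here for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Task:
    name: str
    predicted_ms: float       # estimated duration of the (non-preemptive) kernel
    user_facing: bool = False


def admit_batch_tasks(batch: List[Task],
                      user_task: Task,
                      deadline_ms: float) -> Tuple[List[str], List[str]]:
    """Toy admission policy: while a user-facing kernel is waiting, launch a
    batch kernel ahead of it only if (batch duration + user-facing duration)
    still fits inside the user-facing task's deadline. Because kernels are
    non-preemptive, a long batch kernel issued too early would block the
    user-facing kernel and inflate its tail latency."""
    launched, deferred = [], []
    elapsed = 0.0
    for t in sorted(batch, key=lambda t: t.predicted_ms):   # shortest first
        if elapsed + t.predicted_ms + user_task.predicted_ms <= deadline_ms:
            elapsed += t.predicted_ms
            launched.append(t.name)
        else:
            deferred.append(t.name)
    return launched, deferred


if __name__ == "__main__":
    batch = [Task("img-batch", 12.0), Task("training-kernel", 40.0)]
    user = Task("asr-query", 3.0, user_facing=True)
    launched, deferred = admit_batch_tasks(batch, user, deadline_ms=20.0)
    print("launch before the user-facing kernel:", launched)   # ['img-batch']
    print("defer until after it completes:", deferred)         # ['training-kernel']
```

The same headroom reasoning would also have to account for PCI-e transfer time in a real system, since the abstract identifies data-transfer bandwidth contention as the second source of tail-latency growth.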

Funder

National Science Foundation

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design; Software


Cited by 83 articles.

1. Improving Cluster Utilization Through Adaptive Resource Management for Deep Neural Network and CPU Jobs Colocation;IEEE Transactions on Computers;2023-12

2. Maximizing the Utilization of GPUs Used by Cloud Gaming through Adaptive Co-location with Combo;Proceedings of the 2023 ACM Symposium on Cloud Computing;2023-10-30

3. Enabling Efficient Spatio-Temporal GPU Sharing for Network Function Virtualization;IEEE Transactions on Computers;2023-10

4. System Virtualization for Neural Processing Units;Proceedings of the 19th Workshop on Hot Topics in Operating Systems;2023-06-22

5. V10: Hardware-Assisted NPU Multi-tenancy for Improved Resource Utilization and Fairness;Proceedings of the 50th Annual International Symposium on Computer Architecture;2023-06-17
