Abstract
Large-scale datacenters (DCs) host tens of thousands of diverse applications each day. However, interference between colocated workloads and the difficulty of matching applications to one of the many hardware platforms available can degrade performance, violating the quality of service (QoS) guarantees that many cloud workloads require. While previous work has identified the impact of heterogeneity and interference, existing solutions are computationally intensive, cannot be applied online, and do not scale beyond a few applications.
We present Paragon, an online and scalable DC scheduler that is heterogeneity- and interference-aware. Paragon is derived from robust analytical methods, and instead of profiling each application in detail, it leverages information the system already has about applications it has previously seen. It uses collaborative filtering techniques to quickly and accurately classify an unknown incoming workload with respect to heterogeneity and interference in multiple shared resources. It does so by identifying similarities to previously scheduled applications. The classification allows Paragon to greedily schedule applications in a manner that minimizes interference and maximizes server utilization. After the initial application placement, Paragon monitors application behavior and adjusts the scheduling decisions at runtime to avoid performance degradations. Additionally, we design ARQ, a multiclass admission control protocol that constrains application waiting time. ARQ queues applications in separate classes based on the type of resources they need and avoids long queueing delays for easy-to-satisfy workloads in highly-loaded scenarios. Paragon scales to tens of thousands of servers and applications with marginal scheduling overheads in terms of time or state.
We evaluate Paragon with a wide range of workload scenarios, on both small and large-scale systems, including 1,000 servers on EC2. For a 2,500-workload scenario, Paragon enforces performance guarantees for 91% of applications, while significantly improving utilization. In comparison, heterogeneity-oblivious, interference-oblivious, and least-loaded schedulers only provide similar guarantees for 14%, 11%, and 3% of workloads. The differences are more striking in oversubscribed scenarios where resource efficiency is more critical.
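The abstract describes classifying an unknown workload via collaborative filtering, by relating it to previously scheduled applications. Below is a minimal, illustrative sketch of one such technique, low-rank matrix completion over an application-by-server-configuration score matrix; the function name, the iterated truncated-SVD approach, and the toy data are assumptions for illustration and are not Paragon's actual implementation.

```python
import numpy as np

def estimate_scores(utility_matrix, rank=2, iters=10):
    """Illustrative collaborative-filtering step: low-rank completion
    of a sparse (applications x server-configurations) score matrix.

    Entries are performance scores; np.nan marks (app, config) pairs
    that have not been measured. Returns estimates for missing entries.
    """
    M = np.array(utility_matrix, dtype=float)
    mask = ~np.isnan(M)
    # Initialize unknown entries with per-configuration (column) means.
    col_means = np.nanmean(M, axis=0)
    filled = np.where(mask, M, col_means)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s[rank:] = 0.0                      # keep only the top latent factors
        approx = (U * s) @ Vt               # low-rank reconstruction
        filled = np.where(mask, M, approx)  # keep measured entries fixed
    return filled

# Toy example: four previously seen apps, plus one incoming workload
# probed on only the first configuration (hypothetical numbers).
scores = np.array([
    [0.9, 0.4, 0.7],
    [0.2, 0.8, 0.3],
    [0.8, 0.5, 0.6],
    [0.3, 0.9, 0.4],
    [0.85, np.nan, np.nan],
])
print(np.round(estimate_scores(scores), 2))
```

The completed row for the incoming workload then indicates which configurations it resembles among the known applications, which is the kind of signal a greedy, interference-aware placement step could consume.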
Publisher
Association for Computing Machinery (ACM)
Cited by
110 articles.
1. Lavender: An Efficient Resource Partitioning Framework for Large-Scale Job Colocation;ACM Transactions on Architecture and Code Optimization;2024-09-14
2. PREACT: Predictive Resource Allocation for Bursty Workloads in a Co-located Data Center;Proceedings of the 53rd International Conference on Parallel Processing;2024-08-12
3. Software Resource Disaggregation for HPC with Serverless Computing;2024 IEEE International Parallel and Distributed Processing Symposium (IPDPS);2024-05-27
4. Characterizing In-Kernel Observability of Latency-Sensitive Request-Level Metrics with eBPF;2024 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS);2024-05-05
5. Inverse Response Time Ratio Scheduler: Optimizing Throughput and Response Time for Serverless Computing;2023 IEEE International Conference on Cloud Computing Technology and Science (CloudCom);2023-12-04