Affiliation:
1. University of Michigan, Ann Arbor, MI, USA
2. Hongik University, Seoul, South Korea
Abstract
The demand for multitasking on graphics processing units (GPUs) is constantly increasing, as GPUs have become a default component of modern computer systems alongside traditional processors (CPUs). Preemptive multitasking on CPUs has primarily been supported through context switching. However, the same preemption strategy incurs substantial overhead on GPUs due to their large contexts. The overhead comes in two dimensions: a preempting kernel suffers a long preemption latency, and system throughput is wasted during the switch. Without precise control over this large preemption overhead, multitasking on GPUs has little use for applications with strict latency requirements.
In this paper, we propose Chimera, a collaborative preemption approach that can precisely control the overhead of multitasking on GPUs. Chimera first introduces streaming multiprocessor (SM) flushing, which can instantly preempt an SM by detecting and exploiting idempotent execution. Chimera then uses flushing collaboratively with two previously proposed preemption techniques for GPUs, context switching and draining, to minimize throughput overhead while achieving the required preemption latency. Evaluations show that Chimera violates the deadline for only 0.2% of preemption requests under a 15 µs preemption latency constraint. For multi-programmed workloads, Chimera can improve the average normalized turnaround time by 5.5x and system throughput by 12.2%.
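The collaborative selection the abstract describes can be illustrated with a minimal sketch in Python. This is a simplified illustration, not the paper's implementation: the SMState fields, the cost values, and the pick_technique function are hypothetical stand-ins, and Chimera's actual runtime cost model for estimating per-SM preemption latency and wasted throughput is more detailed.

```python
# Hypothetical sketch of choosing among the three preemption techniques
# (drain, context switch, SM flush) under a latency deadline, in the spirit
# of Chimera's collaborative approach. All fields and costs are assumptions.

from dataclasses import dataclass

@dataclass
class SMState:
    idempotent: bool          # can in-flight thread blocks be safely re-executed?
    drain_latency_us: float   # time for resident thread blocks to finish naturally
    switch_latency_us: float  # time to save the SM's large context to memory

def pick_technique(sm: SMState, deadline_us: float) -> str:
    """Pick the technique that wastes the least throughput while still
    meeting the deadline (assumed ordering: draining discards no work,
    switching pays the save cost, flushing re-executes in-flight work)."""
    if sm.drain_latency_us <= deadline_us:
        return "drain"              # no work is discarded or saved
    if sm.switch_latency_us <= deadline_us:
        return "context-switch"     # pay the context-save cost
    if sm.idempotent:
        return "flush"              # near-instant, but in-flight work re-runs
    return "context-switch"         # fall back; the deadline may be violated

# Example: an SM running idempotent code under a tight 15 us constraint.
sm = SMState(idempotent=True, drain_latency_us=80.0, switch_latency_us=40.0)
print(pick_technique(sm, deadline_us=15.0))  # -> "flush"
```

Under a tight deadline such as 15 µs, neither draining nor context switching fits, so the sketch falls back to flushing when execution is idempotent, which mirrors why detecting idempotence matters in the abstract's argument.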
Publisher
Association for Computing Machinery (ACM)
Cited by
19 articles.
1. DeInfer: A GPU resource allocation algorithm with spatial sharing for near-deterministic inferring tasks; Proceedings of the 53rd International Conference on Parallel Processing; 2024-08-12
2. GhOST: a GPU Out-of-Order Scheduling Technique for Stall Reduction; 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA); 2024-06-29
3. Automated Backend Allocation for Multi-Model, On-Device AI Inference; Proceedings of the ACM on Measurement and Analysis of Computing Systems; 2023-12-07
4. Secure and Timely GPU Execution in Cyber-physical Systems; Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security; 2023-11-15
5. Virtual PIM: Resource-Aware Dynamic DPU Allocation and Workload Scheduling Framework for Multi-DPU PIM Architecture; 2023 32nd International Conference on Parallel Architectures and Compilation Techniques (PACT); 2023-10-21