Co-scheduling HPC workloads on cache-partitioned CMP platforms

Author:

Guillaume Aupy (1) ORCID, Anne Benoit (2, 3), Brice Goglin (1), Loïc Pottier (2), Yves Robert (2, 4) ORCID

Affiliation:

1. Inria, Université de Bordeaux, France

2. Laboratoire LIP, École Normale Supérieure de Lyon, France

3. CSE, Georgia Institute of Technology, Atlanta, USA

4. ICL, University of Tennessee, Knoxville, USA

Abstract

With the recent advent of many-core architectures such as chip multiprocessors (CMPs), the number of processing units accessing a global shared memory is constantly increasing. Co-scheduling techniques are used to improve application throughput on such architectures, but sharing resources often generates critical interferences. In this article, we focus on the interferences in the last level of cache (LLC) and use the Cache Allocation Technology (CAT) recently provided by Intel to partition the LLC and give each co-scheduled application its own cache area. We consider m iterative HPC applications running concurrently and answer the following questions: (i) how to precisely model the behavior of these applications on the cache-partitioned platform? and (ii) how many cores and which cache fractions should be assigned to each application to maximize platform efficiency? Here, platform efficiency is defined either as maximizing the global performance, or as guaranteeing a fixed ratio of iterations per second for each application. Through extensive experiments using CAT, we demonstrate the impact of cache partitioning when multiple HPC applications are co-scheduled onto CMP platforms.
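To make the resource-allocation question in the abstract concrete, here is a minimal sketch (not taken from the article) of choosing, for m co-scheduled applications, a number of cores and a number of LLC cache ways each so that the total throughput in iterations per second is maximized. The performance model `iterations_per_second`, the platform sizes, and the brute-force search are all illustrative assumptions; the article derives its own application model and scheduling strategies.

```python
"""
Illustrative sketch only: exhaustively search over core counts and LLC
cache-way assignments for m co-scheduled applications, and keep the
allocation that maximizes the total iterations per second.
"""
from itertools import product

M = 3            # number of co-scheduled applications (example value)
TOTAL_CORES = 8  # cores available on the CMP (example value)
TOTAL_WAYS = 8   # LLC ways that CAT can partition (example value)

def iterations_per_second(app, cores, ways):
    # Hypothetical performance model: speed grows with cores but is
    # penalized when the cache fraction is small. A real model would be
    # calibrated from measurements of each application.
    cache_factor = ways / (ways + 4)
    return cores * cache_factor / (app + 1)

best = None
for cores in product(range(1, TOTAL_CORES + 1), repeat=M):
    if sum(cores) > TOTAL_CORES:
        continue
    for ways in product(range(1, TOTAL_WAYS + 1), repeat=M):
        if sum(ways) > TOTAL_WAYS:
            continue
        throughput = sum(iterations_per_second(a, cores[a], ways[a])
                         for a in range(M))
        if best is None or throughput > best[0]:
            best = (throughput, cores, ways)

print("best total throughput:", best[0])
print("cores per app:", best[1])
print("LLC ways per app:", best[2])
```

Exhaustive search is only tractable for small instances such as this one; on real hardware the chosen way counts would then be enforced with Intel CAT (for example through the Linux resctrl interface), which this sketch does not attempt.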

Publisher

SAGE Publications

Subject

Hardware and Architecture, Theoretical Computer Science, Software

Cited by 6 articles.

1. PFA: Performance and Fairness-Aware LLC Partitioning Method;Algorithms and Architectures for Parallel Processing;2022

2. Analytical and Numerical Evaluation of Co-Scheduling Strategies and Their Application;Computers;2021-10-02

3. Playing Fetch with CAT;Proceedings of the 17th International Workshop on Data Management on New Hardware (DaMoN 2021);2021-06-20

4. Interference-aware execution framework with Co-scheML on GPU clusters;Cluster Computing;2021-05-18

5. Sequence-Based Selection Hyper-Heuristic Model via MAP-Elites;IEEE Access;2021
