OWL: Cooperative Thread Array Aware Scheduling Techniques for Improving GPGPU Performance

Authors:

Adwait Jog (1), Onur Kayiran (1), Nachiappan Chidambaram Nachiappan (1), Asit K. Mishra (2), Mahmut T. Kandemir (1), Onur Mutlu (3), Ravishankar Iyer (2), Chita R. Das (1)

Affiliations:

1. The Pennsylvania State University, University Park, PA, USA

2. Intel Labs, Hillsboro, OR, USA

3. Carnegie Mellon University, Pittsburgh, PA, USA

Abstract

Emerging GPGPU architectures, along with programming models like CUDA and OpenCL, offer a cost-effective platform for many applications by providing high thread-level parallelism at lower energy budgets. Unfortunately, for many general-purpose applications, available hardware resources of a GPGPU are not efficiently utilized, leading to lost opportunity in improving performance. A major cause of this is the inefficiency of current warp scheduling policies in tolerating long memory latencies. In this paper, we identify that the scheduling decisions made by such policies are agnostic to thread-block, or cooperative thread array (CTA), behavior, and, as a result, inefficient. We present a coordinated CTA-aware scheduling policy that utilizes four schemes to minimize the impact of long memory latencies. The first two schemes, CTA-aware two-level warp scheduling and locality-aware warp scheduling, enhance per-core performance by effectively reducing cache contention and improving latency hiding capability. The third scheme, bank-level parallelism-aware warp scheduling, improves overall GPGPU performance by enhancing DRAM bank-level parallelism. The fourth scheme employs opportunistic memory-side prefetching to further enhance performance by taking advantage of open DRAM rows. Evaluations on a 28-core GPGPU platform with highly memory-intensive applications indicate that our proposed mechanism can provide 33% average performance improvement compared to the commonly-employed round-robin warp scheduling policy.
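The first scheme in the abstract (CTA-aware two-level warp scheduling) can be sketched as a toy model: warps are partitioned into fetch groups along CTA boundaries, the scheduler round-robins within the active group, and it advances to the next group only when every warp in the active group is stalled on memory. The class name, data layout, and group-switching policy below are assumptions made for exposition; this is not the authors' implementation.

```python
class CTAAwareTwoLevelScheduler:
    """Illustrative sketch (assumed structure, not the paper's code):
    warps are grouped along CTA boundaries; the scheduler issues from
    the active group in round-robin order and switches groups only when
    all warps in the active group are stalled on memory."""

    def __init__(self, warps_by_cta, group_size_ctas):
        # Partition CTAs into fixed-size groups; each group holds the
        # warp ids of its member CTAs, in CTA order.
        ctas = sorted(warps_by_cta)
        self.groups = [
            [w for cta in ctas[i:i + group_size_ctas]
               for w in warps_by_cta[cta]]
            for i in range(0, len(ctas), group_size_ctas)
        ]
        self.active = 0  # index of the currently prioritized group

    def next_warp(self, stalled):
        """Return the next warp id to issue, or None if all warps are
        stalled. `stalled` is a set of warp ids waiting on memory."""
        for _ in range(len(self.groups)):
            group = self.groups[self.active]
            ready = [w for w in group if w not in stalled]
            if ready:
                # Issue the first ready warp, then rotate it to the back
                # of its group for intra-group round-robin fairness.
                w = ready[0]
                group.append(group.pop(group.index(w)))
                return w
            # Entire group is stalled: fall over to the next CTA group.
            self.active = (self.active + 1) % len(self.groups)
        return None
```

For example, with CTAs {0: [0, 1], 1: [2, 3], 2: [4, 5], 3: [6, 7]} and a group size of two CTAs, the scheduler keeps issuing warps 0-3 until all four stall, and only then turns to warps 4-7, so the second group's CTAs reach their memory phase later and overlap it with the first group's compute.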

Publisher

Association for Computing Machinery (ACM)

Cited by 3 articles.

1. Application aware Scalable Architecture for GPGPU. Journal of Systems Architecture, Sept. 2018.

2. CWLP: coordinated warp scheduling and locality-protected cache allocation on GPUs. Frontiers of Information Technology & Electronic Engineering, Feb. 2018.

3. Locality-protected cache allocation scheme with low overhead on GPUs. IET Computers & Digital Techniques, Jan. 2018.
