Affiliation:
1. University of British Columbia, Canada
Abstract
Heterogeneous systems that integrate a multicore CPU and a GPU on the same die are ubiquitous. On these systems, the CPU and GPU share the same physical memory, as opposed to using separate memory dies. Although integration eliminates the need to copy data between the CPU and the GPU, arranging transparent memory sharing between the two devices can carry large overheads. Memory on CPU/GPU systems is typically managed by a software framework such as OpenCL or CUDA, which includes a runtime library and communicates with a GPU driver. These frameworks offer a range of memory management methods that vary in ease of use, consistency guarantees, and performance. In this study, we analyze some of the common memory management methods of the most widely used software frameworks for heterogeneous systems: CUDA, OpenCL 1.2, OpenCL 2.0, and HSA, on NVIDIA and AMD hardware. We focus on performance/functionality trade-offs, with the goal of exposing their performance impact and simplifying the choice of memory management methods for programmers.
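As a rough illustration of the trade-off the abstract describes, the sketch below contrasts two CUDA memory management methods: explicit copies (`cudaMalloc`/`cudaMemcpy`) versus unified "managed" memory (`cudaMallocManaged`), where one allocation is visible to both CPU and GPU and the runtime/driver migrates data transparently. The kernel, sizes, and launch configuration are illustrative only, not taken from the paper.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Trivial kernel used to exercise both allocation styles.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void) {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Method 1: explicit copy. Two allocations (host and device) are
    // managed by the programmer and data is moved with cudaMemcpy.
    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) h[i] = 1.0f;
    float *d;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d);
    free(h);

    // Method 2: unified (managed) memory. A single pointer is valid on
    // both CPU and GPU; explicit copies disappear, but the transparent
    // page migration can itself carry overhead.
    float *m;
    cudaMallocManaged(&m, bytes);
    for (int i = 0; i < n; i++) m[i] = 1.0f;
    scale<<<(n + 255) / 256, 256>>>(m, 2.0f, n);
    cudaDeviceSynchronize();       // required before the CPU reads m
    printf("m[0] = %f\n", m[0]);   // CPU reads the same allocation
    cudaFree(m);
    return 0;
}
```

The two methods compute the same result; the difference the paper studies is how much programmer effort and runtime overhead each framework's sharing mechanism imposes.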
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design; Software
Cited by: 19 articles.