Affiliation:
1. Computer Science, The University of Alabama in Huntsville, Huntsville, United States
Abstract
Parallelizing code in a shared-memory environment is commonly done using loop scheduling (LS) in a fork-join manner, as in OpenMP. This manner of parallelization is popular because it is easy to code, but the choice of LS method is important when the workload per iteration is highly variable. Currently, the shared-memory environment in high-performance computing is evolving as larger chiplet-based processors with high core counts and segmented L3 cache are introduced. These processors have a stronger nonuniform memory access (NUMA) effect than the previous generation of x86-64 processors. This work modifies the adaptive self-scheduling loop scheduler known as iCh (irregular Chunk) for these NUMA environments while analyzing the impact of these systems on the default OpenMP LS methods. In particular, iCh is intended as a default LS method for irregular applications (i.e., applications where the workload per iteration is highly variable) that guarantees “good” performance without tuning. The modified version, named NiCh, is demonstrated over multiple irregular applications to show the variation in performance. The work demonstrates that NiCh better handles architectures with stronger NUMA effects and, in particular, outperforms iCh when the number of threads is greater than the number of cores. However, NiCh is less universally “good” than iCh and requires a set of hardware-dependent parameters.
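As a concrete illustration of the fork-join loop-scheduling pattern the abstract describes, the following minimal C/OpenMP sketch (not taken from the paper) parallelizes a loop whose per-iteration cost varies and selects a built-in LS method through the schedule clause. The kernel irregular_work and the chunk size of 64 are hypothetical choices for illustration only; iCh and NiCh are research schedulers and are not part of the standard OpenMP runtime.

/* Minimal sketch: fork-join loop scheduling in OpenMP for an
 * irregular workload. Built-in schedules (static, dynamic, guided)
 * are chosen via the schedule clause; dynamic with a small chunk
 * is used here because per-iteration cost varies. */
#include <omp.h>
#include <stdio.h>

/* Hypothetical irregular kernel: cost depends on the iteration index. */
static double irregular_work(long i) {
    double s = 0.0;
    for (long k = 0; k < i % 1000; ++k)
        s += (double)k / (double)(i + 1);
    return s;
}

int main(void) {
    const long n = 1000000;
    double total = 0.0;

    /* Dynamic scheduling hands out small chunks on demand, which helps
       when iteration costs vary; a static schedule would load-imbalance. */
    #pragma omp parallel for schedule(dynamic, 64) reduction(+:total)
    for (long i = 0; i < n; ++i)
        total += irregular_work(i);

    printf("total = %f (max threads = %d)\n", total, omp_get_max_threads());
    return 0;
}

Compiling with, e.g., gcc -fopenmp and varying OMP_NUM_THREADS shows how the chosen LS method and thread count affect load balance, which is the tuning burden that iCh and NiCh aim to remove.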
Publisher
Association for Computing Machinery (ACM)