Authors:
Flavio Cesar Cunha Galeazzo, R. Gregor Weiß, Sergey Lesnik, Henrik Rusche, Andreas Ruopp
Abstract
The performance of OpenFOAM in strong scaling tests on HPC systems with AMD EPYC processors exhibits a pronounced superlinear speedup. Even simple test cases show superlinear speedups of over 300%, which has a significant impact on the efficient use of computing resources.
On the previous generation of HPC architectures, a superlinear speedup of about 10% to 20% was commonly expected and accepted by CFD users [1]. The superlinear speedup measured here is much more pronounced and persists despite the communication overhead even at larger scales.
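For reference, speedup and parallel efficiency follow the usual definitions, S(p) = T(1)/T(p) and E(p) = S(p)/p, so an efficiency above 1 signals superlinear behaviour. The short sketch below uses purely illustrative timings (not measurements from the study) to show how these quantities are evaluated:

    # Illustrative timings only: cores -> wall time in seconds (assumed values).
    timings = {1: 1000.0, 128: 6.0, 256: 2.5}
    t1 = timings[1]
    for p, tp in sorted(timings.items()):
        speedup = t1 / tp          # S(p) = T(1) / T(p)
        efficiency = speedup / p   # E(p) > 1 means superlinear speedup
        print(f"{p:4d} cores: speedup {speedup:8.1f}, efficiency {efficiency:4.2f}")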
A detailed performance analysis of OpenFOAM follows, employing various High-Performance Computing (HPC) architectures, including AMD, ARM, and Intel systems. The performance metric FVOPS (Finite VOlumes solved Per Second) is introduced to compare the performance of Computational Fluid Dynamics (CFD) applications as the grid size varies, as occurs in a strong scaling test. The achievable FVOPS depends on various factors, including the simulation type, the boundary conditions, and especially the grid size of a use case. Analysing FVOPS at the single-node level with varying grid size reveals significant differences in performance and cache utilization, which explain the large superlinear speedups seen in the strong scaling tests.
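One plausible way to read such a throughput metric is as the number of grid cells advanced per second of wall-clock time, aggregated over all time steps; the exact normalization used in the study may differ, and the helper name and values below are purely illustrative:

    # Hypothetical helper: cells solved per second over a whole run.
    def fvops(n_cells, n_time_steps, wall_time_s):
        return n_cells * n_time_steps / wall_time_s

    # Example: 1,000,000 cells advanced over 100 time steps in 50 s of wall time.
    print(f"FVOPS = {fvops(1_000_000, 100, 50.0):.3e}")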
Furthermore, FVOPS can be used as a simple benchmark to determine the optimal number of grid elements per rank to simulate a given use case at peak efficiency on a given platform, resulting in time, energy, and cost savings.
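A minimal sketch of this benchmarking idea is shown below; the measurements are placeholders standing in for short single-node OpenFOAM runs at different grid sizes, and only the selection logic is illustrated:

    # Placeholder measurements: cells per rank -> observed FVOPS (assumed values).
    measured = {5_000: 2.1e6, 10_000: 2.8e6, 20_000: 2.3e6, 50_000: 1.6e6}
    # Pick the grid size per rank that maximizes throughput.
    best = max(measured, key=measured.get)
    print(f"Peak efficiency at about {best:,} cells per rank ({measured[best]:.2e} FVOPS)")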
The FVOPS metric also facilitates direct comparisons between different HPC architectures. Tests on AMD, ARM, and Intel processors show a performance peak at around 10,000 grid elements per core. The large L3 cache of the AMD processors is particularly advantageous, as indicated by the L3 cache miss rates measured on AMD EPYC processors. Our results suggest that future HPC architectures with larger caches and higher memory bandwidth would benefit the CFD community.