Affiliation:
1. Jeremiah Horrocks Institute for Mathematics, Physics and Astronomy, University of Central Lancashire, Preston PR1 2HE, United Kingdom
Abstract
It is shown that micromagnetic and atomistic spin dynamics simulations can use multiple graphics processing units (GPUs) not only to reduce computation time but also to allow simulation sizes larger than is possible on a single GPU. While interactions that depend on neighboring spins, such as exchange interactions, may be implemented efficiently by transferring data between GPUs using halo regions or direct memory accesses, the long-range demagnetizing interaction is the main difficulty in achieving good performance scaling, since the inter-GPU data transfer rate is a significant bottleneck. A multi-GPU convolution algorithm is developed here, which relies on single-GPU FFTs executed in parallel. It is shown that even for micromagnetic simulations where the demagnetizing interaction computation time dominates, good performance scaling may be achieved, with speedup factors up to 1.8, 2.5, and 3.1 for two, three, and four GPUs, respectively. The code developed here can be used with any number of GPUs in parallel, with performance scaling strongly dependent on the inter-GPU data transfer rate and connection topology. Scaling is further improved in micromagnetic simulations that include a spin transport solver, with speedup factors up to 1.96, 2.8, and 3.7 for two, three, and four GPUs, respectively. The best-case scenario is obtained for atomistic simulations, where the demagnetizing interaction is implemented with spin-averaged cells. Using a single workstation with four GPUs, it is shown that atomistic spin dynamics simulations with up to 1 × 10⁹ spins and atomistic Monte Carlo simulations with up to 2 × 10⁹ spins are possible, with near-ideal performance scaling.
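The multi-GPU convolution approach described in the abstract, built from single-GPU FFTs executed in parallel, can be illustrated schematically. The sketch below is not the paper's implementation: it uses NumPy on the CPU as a stand-in for per-GPU FFT calls, partitioning a 2D array across a hypothetical set of devices, transforming each partition along one axis, regrouping the data (the step that corresponds to the inter-GPU data transfer bottleneck), and transforming along the other axis. Because the multidimensional FFT is separable, the result matches a single-device 2D FFT.

```python
import numpy as np

def multi_device_fft2(data, n_devices):
    """Compute a 2D FFT by splitting work across n_devices partitions.

    Stand-in for a multi-GPU FFT: each "device" transforms its own
    slab independently; the regrouping step models inter-GPU transfer.
    """
    # Stage 1: split rows across devices; each transforms along x.
    row_chunks = np.array_split(data, n_devices, axis=0)
    stage1 = [np.fft.fft(chunk, axis=1) for chunk in row_chunks]

    # Exchange: regroup partially transformed data into column slabs.
    # On real hardware this all-to-all transfer dominates scaling.
    partial = np.vstack(stage1)
    col_chunks = np.array_split(partial, n_devices, axis=1)

    # Stage 2: each device transforms its column slab along y.
    stage2 = [np.fft.fft(chunk, axis=0) for chunk in col_chunks]
    return np.hstack(stage2)

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8))
# Separability guarantees agreement with the single-device result.
assert np.allclose(multi_device_fft2(a, 4), np.fft.fft2(a))
```

Since each 1D transform touches only data resident in its own partition, the FFT stages parallelize ideally; performance scaling is then governed by the exchange step, consistent with the abstract's observation that the inter-GPU transfer rate and connection topology dominate.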
Subject
General Physics and Astronomy
Cited by
2 articles.