Authors: Stagni Federico, Valassi Andrea, Romanovskiy Vladimir
Abstract
High Performance Computing (HPC) supercomputers are expected to play an increasingly important role in HEP computing in the coming years. While HPC resources are not necessarily the optimal fit for HEP workflows, computing time at HPC centres has already been available to the LHC experiments on an opportunistic basis for some time, and it is also possible that part of the pledged computing resources will be offered as CPU time allocations at HPC centres in the future. Integrating the experiment workflows to make the most efficient use of HPC resources is therefore essential. This paper describes the work that was necessary to integrate LHCb workflows at a specific HPC site, the Marconi-A2 system at CINECA in Italy, where LHCb benefited from a joint PRACE (Partnership for Advanced Computing in Europe) allocation with the other Large Hadron Collider (LHC) experiments. This required addressing two types of challenges: on the software application side, optimising the performance of the workloads on a many-core hardware architecture that differs significantly from those traditionally used in WLCG (the Worldwide LHC Computing Grid), by reducing the memory footprint through a multi-process approach; and on the distributed computing side, submitting these workloads using more than one logical processor per job, which had never been done before in LHCb.
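The abstract mentions a multi-process approach for reducing the memory footprint on many-core nodes but does not spell out the mechanism. As a rough illustration of the general technique only (assuming fork-based copy-on-write sharing of read-only data, which is the usual way such schemes cut per-core memory; this is not the LHCb/Gauss code, and the names `load_shared_data`, `process_event` and `LARGE_READONLY_DATA` are hypothetical), a minimal Python sketch might look like this:

```python
# Illustrative sketch only (not the actual LHCb/Gauss implementation): load large
# read-only data once in the parent process and fork the workers afterwards, so
# that on Linux those pages are shared copy-on-write rather than duplicated per core.
import os
import multiprocessing as mp

# Hypothetical stand-in for the large read-only state (geometry, conditions, ...).
LARGE_READONLY_DATA = None


def load_shared_data():
    global LARGE_READONLY_DATA
    LARGE_READONLY_DATA = list(range(10_000_000))  # placeholder for 100s of MB


def process_event(event_id):
    # Workers inherit LARGE_READONLY_DATA through fork; reading it reuses the
    # parent's memory pages instead of allocating a private copy per worker.
    return event_id, LARGE_READONLY_DATA[event_id % len(LARGE_READONLY_DATA)]


if __name__ == "__main__":
    load_shared_data()                          # load once, before forking
    n_workers = len(os.sched_getaffinity(0))    # logical processors granted to the slot
    ctx = mp.get_context("fork")                # copy-on-write sharing (Linux only)
    with ctx.Pool(processes=n_workers) as pool:
        results = pool.map(process_event, range(1000))
    print(f"processed {len(results)} events with {n_workers} workers")
```

In a job slot that provides more than one logical processor, this pattern lets all workers share a single copy of the read-only data, which is what makes requesting multi-processor slots worthwhile in the first place.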