Authors:
Dirk Hufnagel, Burt Holzman, David Mason, Parag Mhashilkar, Steven Timm, Anthony Tiradani, Farrukh Aftab Khan, Oliver Gutsche, Kenneth Bloom
Abstract
The higher energy and luminosity of the LHC in Run 2 have put increased pressure on CMS computing resources. Extrapolating to even higher luminosities (and thus higher event complexities and trigger rates) beyond Run 3, it becomes clear that simply scaling up the current model of CMS computing alone will become economically infeasible. High Performance Computing (HPC) facilities, widely used in scientific computing outside of HEP, have the potential to help fill the gap. Here we describe the U.S. CMS efforts to integrate US HPC resources into CMS computing via the HEPCloud project at Fermilab. We present advancements in our ability to use NERSC resources at scale, as well as efforts to integrate other HPC sites. We report experience with the elastic use of HPC resources, scaling up quickly when required by CMS workflows. We also present performance studies of the CMS multi-threaded framework on both Haswell and KNL HPC resources.
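Resource provisioning in this model is mediated by HTCondor and glideinWMS. As a minimal sketch, a multi-threaded CMS-style job might be requested through an HTCondor submit description along the following lines (the wrapper script name, site attribute, and resource values are illustrative assumptions, not details taken from the paper):

```
# Hypothetical HTCondor submit description for a multi-threaded job
universe       = vanilla
executable     = cmsRun_wrapper.sh   # illustrative wrapper name
arguments      = config.py

# Request a whole-node slice suited to the multi-threaded framework
request_cpus   = 8
request_memory = 16 GB

queue 1
```

In practice, HEPCloud provisions such slots by submitting pilot (glidein) jobs to the HPC batch system, which then join the HTCondor pool and match user jobs like the one sketched above.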
Cited by 5 articles.