Authors:
Pérez-Calero Yzquierdo A., Mascheroni M., Acosta Flechas M., Dost J., Haleem S., Hurtado Anampa K., Khan F. A., Kizinevič E., Peregonov N., et al.
Abstract
The CMS experiment at CERN employs a distributed computing infrastructure to satisfy its data processing and simulation needs. The CMS Submission Infrastructure team manages a dynamic HTCondor pool, aggregating mainly Grid clusters worldwide, but also HPC, Cloud and opportunistic resources. This CMS Global Pool, which currently involves over 70 computing sites worldwide and peaks at 350k CPU cores, is employed to manage the simultaneous execution of up to 150k tasks. While the present infrastructure is sufficient to harness the current computing power scales, the latest CMS estimates predict a noticeable expansion in the amount of CPU that will be required to cope with the massive data increase of the High-Luminosity LHC (HL-LHC) era, planned to start in 2027. This contribution presents the latest results of the CMS Submission Infrastructure team in exploring and expanding the scalability reach of our Global Pool, in order to preemptively detect and overcome any barriers in relation to the HL-LHC goals, while maintaining high efficiency in our workload scheduling and resource utilization.
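The pool-size and occupancy figures quoted above (350k CPU cores, 150k simultaneous tasks) are the kind of metrics tracked on the pool's central manager. As a minimal sketch of how such numbers can be gathered with the HTCondor Python bindings (the collector hostname below is a placeholder, not the real CMS endpoint), one could query the collector for all execute-slot ads and aggregate their core counts:

    import htcondor  # HTCondor Python bindings

    # Placeholder central-manager host; substitute the pool's actual collector.
    collector = htcondor.Collector("collector.example.cern.ch")

    # Fetch one ClassAd per execute slot (startd), requesting only the
    # attributes needed for the aggregation below.
    slot_ads = collector.query(
        htcondor.AdTypes.Startd,
        projection=["Name", "State", "Cpus"],
    )

    # Sum advertised cores across all slots and count those currently claimed.
    total_cpus = sum(ad.get("Cpus", 0) for ad in slot_ads)
    busy_slots = sum(1 for ad in slot_ads if ad.get("State") == "Claimed")
    print(f"{len(slot_ads)} slots, {total_cpus} CPU cores, {busy_slots} claimed")

At the scales discussed in the paper, a single such query returns hundreds of thousands of ads, which is itself one of the scalability dimensions (collector load) the Submission Infrastructure team must keep within bounds.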