Authors:
Biscarat Catherine, Boccali Tommaso, Bonacorsi Daniele, Bozzi Concezio, Costanzo Davide, Duellmann Dirk, Elmsheuser Johannes, Fede Eric, Flix Molina José, Giordano Domenico, Grigoras Costin, Iven Jan, Jouvin Michel, Kemp Yves, Lange David, Maganza Riccardo, Meinhard Helge, Michelotto Michele, Roy Gareth Douglas, Sansum Andrew, Sartirana Andrea, Schulz Markus, Sciabà Andrea, Smirnova Oxana, Stewart Graeme, Valassi Andrea, Vernet Renaud, Wenaus Torre, Wuerthwein Frank
Abstract
The increase in the scale of LHC computing during Run 3 and Run 4 (HL-LHC) will certainly require radical changes to the computing models and the data processing of the LHC experiments. The working group established by WLCG and the HEP Software Foundation to investigate all aspects of the cost of computing, and how to optimise them, has continued producing results and improving our understanding of this process. In particular, the experiments have developed more sophisticated ways to calculate their resource needs, and we now have a much more detailed process for calculating infrastructure costs, including studies on the impact of HPC and GPU-based resources on meeting the computing demands. We have also developed and refined tools to quantitatively study the performance of experiment workloads, and we are actively collaborating with other activities related to data access, benchmarking and technology cost evolution. In this contribution we present our recent developments and results and outline the directions of future work.
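As a purely illustrative aid, the following minimal Python sketch shows the kind of first-order calculation that resource-need and cost models of this type build on: the CPU requirement scales with the number of events times the per-event processing time (in HS06-seconds), and the hardware cost follows an assumed yearly price/performance improvement. This is not the working group's actual model; every function name, parameter and number below is a hypothetical assumption chosen only for the example.

# Illustrative first-order resource/cost extrapolation sketch.
# NOT the working group's model; all names and values are hypothetical.

def cpu_need_hs06(events: float, sec_per_event_hs06: float) -> float:
    """Total CPU need in HS06-seconds for a given number of events."""
    return events * sec_per_event_hs06

def hardware_cost(hs06_seconds: float, chf_per_hs06_year_now: float,
                  yearly_improvement: float, years_ahead: int) -> float:
    """Cost estimate assuming a constant yearly price/performance gain."""
    seconds_per_year = 365 * 24 * 3600
    hs06_years = hs06_seconds / seconds_per_year
    # Price per HS06-year shrinks by the assumed improvement factor each year.
    unit_price = chf_per_hs06_year_now * (1 - yearly_improvement) ** years_ahead
    return hs06_years * unit_price

if __name__ == "__main__":
    # Hypothetical campaign: 1e10 events at 100 HS06.s per event,
    # priced seven years ahead with an assumed 15% yearly improvement.
    need = cpu_need_hs06(events=1e10, sec_per_event_hs06=100.0)
    cost = hardware_cost(need, chf_per_hs06_year_now=1.0,
                         yearly_improvement=0.15, years_ahead=7)
    print(f"CPU need: {need:.3e} HS06.s, estimated cost: {cost:.2f} CHF")

A real model of this kind would add further terms for disk, tape, network and operations, and would replace the single improvement factor with technology-specific cost-evolution curves.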