Author:
Dirk Sammel, Michael Boehler, Anton J. Gamel, Markus Schumacher
Abstract
A data caching setup has been implemented for the High Energy Physics (HEP) computing infrastructure in Freiburg, Germany, as a possible alternative to local long-term storage. Files are automatically cached on disk upon first request by a client, can be accessed from the cache for subsequent requests, and are deleted after predefined conditions are met. The required components are provided to a dedicated HEP cluster and, via virtual research environments, to the opportunistically used High-Performance Computing (HPC) cluster NEMO (Neuroscience, Elementary Particle Physics, Microsystems Engineering and Materials Science). A typical HEP workflow has been implemented as a benchmark test to identify any overhead introduced by the caching setup with respect to direct, non-cached data access, and to compare the performance of cached and non-cached access to several external file sources. The results indicate no significant overhead in the workflow and faster file access with the caching setup, especially for geographically distant file sources. Additionally, the hardware requirements for various numbers of parallel file requests were measured to estimate future requirements.
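The caching behaviour summarized above (fetch from the remote source on first request, serve subsequent requests from the local disk cache, delete cached files once a predefined condition is met) can be illustrated with a minimal read-through cache sketch in Python. This is purely illustrative: the cache directory, the age-based eviction condition, and the function names are hypothetical and do not correspond to the actual components deployed in Freiburg.

import os
import time
import urllib.request

CACHE_DIR = "/tmp/hep-cache"        # hypothetical cache location
MAX_AGE_SECONDS = 7 * 24 * 3600     # hypothetical eviction condition (file age)

def cached_fetch(url: str) -> str:
    """Return a local path for url, downloading it only on a cache miss."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    local_path = os.path.join(CACHE_DIR, url.replace("/", "_"))
    if not os.path.exists(local_path):
        # Cache miss: retrieve the file from the external source once.
        urllib.request.urlretrieve(url, local_path)
    # Cache hit (or freshly cached): serve the local copy.
    return local_path

def evict_stale_files() -> None:
    """Delete cached files once the predefined age condition is met."""
    now = time.time()
    for name in os.listdir(CACHE_DIR):
        path = os.path.join(CACHE_DIR, name)
        if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            os.remove(path)

In a production setup such as the one described in the paper, this logic is provided by dedicated caching services rather than application code; the sketch only shows the request/hit/eviction pattern being benchmarked.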
Funder
Bundesministerium für Bildung und Forschung
Albert-Ludwigs-Universität Freiburg im Breisgau
Publisher
Springer Science and Business Media LLC
Subject
Nuclear and High Energy Physics, Computer Science (miscellaneous), Software