Authors:
Schnepf Matthias J., von Cube R. Florian, Fischer Max, Giffels Manuel, Heidecker Christoph, Heiss Andreas, Kuehn Eileen, Petzold Andreas, Quast Guenter, Sauter Martin
Abstract
Demand for computing resources in high energy physics (HEP) is highly dynamic, while the resources provided by the Worldwide LHC Computing Grid (WLCG) remain static. It has become evident that opportunistic resources such as High Performance Computing (HPC) centers and commercial clouds are well suited to cover peak loads. However, the utilization of these resources gives rise to new levels of complexity: resources need to be managed highly dynamically, and HEP applications require a very specific software environment that is usually not provided at opportunistic sites. Furthermore, limited network bandwidth can cause I/O-intensive workflows to run inefficiently.
The key to running HEP applications dynamically on opportunistic resources is the use of modern container and virtualization technologies. Based on these technologies, the Karlsruhe Institute of Technology (KIT) has developed ROCED, a resource manager that dynamically integrates and manages a variety of opportunistic resources. In combination with ROCED, the HTCondor batch system acts as a powerful single entry point to all available computing resources, leading to a seamless and transparent integration of opportunistic resources into HEP computing.
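To make the pattern described above more concrete, the following is a minimal, hypothetical sketch of a demand-driven resource manager loop: it compares the number of idle jobs in the batch system with the number of running opportunistic workers and starts or drains containerized worker nodes accordingly. The container image name and the helper functions are illustrative placeholders for this sketch and do not reflect the actual ROCED implementation.

```python
# Hypothetical sketch of a demand-driven resource manager: scale containerized
# worker nodes on an opportunistic site up or down based on batch-system demand.
# Image name and helper functions are placeholders, not the ROCED implementation.

import subprocess
import time

WORKER_IMAGE = "example.org/hep-worker:latest"  # placeholder container image
MAX_WORKERS = 50
POLL_INTERVAL = 60  # seconds


def query_idle_jobs() -> int:
    """Placeholder: return the number of idle jobs reported by the batch system
    (e.g. obtained from HTCondor via its command line tools or Python bindings)."""
    return 0


def boot_worker() -> str:
    """Start a containerized worker node that registers itself with the batch pool."""
    result = subprocess.run(
        ["docker", "run", "--detach", WORKER_IMAGE],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # container ID


def drain_worker(container_id: str) -> None:
    """Stop a worker container once it is no longer needed."""
    subprocess.run(["docker", "stop", container_id], check=True)


def manage(active_workers: list) -> None:
    """One management cycle: scale up on demand, scale down when idle."""
    idle_jobs = query_idle_jobs()
    if idle_jobs > 0 and len(active_workers) < MAX_WORKERS:
        active_workers.append(boot_worker())
    elif idle_jobs == 0 and active_workers:
        drain_worker(active_workers.pop())


if __name__ == "__main__":
    workers = []
    while True:
        manage(workers)
        time.sleep(POLL_INTERVAL)
```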
KIT is currently improving resource management and job scheduling by taking into account the I/O requirements of individual workflows, the available network bandwidth, and scalability. To this end, we are developing a new resource manager called TARDIS. In this paper, we give an overview of the technologies used, the dynamic management and integration of resources, and the status of I/O-based resource and job scheduling.
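As an illustration of the I/O-based scheduling idea, the following hypothetical sketch places a job on an opportunistic site only if the aggregate I/O demand of the jobs already running there, plus the new job, stays within the available network bandwidth; otherwise the job remains on local Grid resources. The data structures, names, and numbers are assumptions made for this sketch and are not taken from the TARDIS design.

```python
# Hypothetical illustration of I/O-aware job placement: a job only goes to a
# remote (opportunistic) site if its I/O demand fits the remaining bandwidth.
# All classes and values are assumptions for this sketch, not the TARDIS design.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Job:
    name: str
    io_rate_mbps: float  # expected average input rate of the workflow


@dataclass
class OpportunisticSite:
    name: str
    bandwidth_mbps: float  # network bandwidth towards the remote site
    running: List[Job] = field(default_factory=list)

    def used_bandwidth(self) -> float:
        return sum(job.io_rate_mbps for job in self.running)

    def can_accept(self, job: Job) -> bool:
        """Accept a job only if its I/O demand fits into the remaining bandwidth."""
        return self.used_bandwidth() + job.io_rate_mbps <= self.bandwidth_mbps


def place(job: Job, sites: List[OpportunisticSite]) -> str:
    """Prefer the remote site with the most spare bandwidth; otherwise keep the
    job on local Grid resources, where its I/O is not network-limited."""
    candidates = [site for site in sites if site.can_accept(job)]
    if not candidates:
        return "local"
    best = max(candidates, key=lambda s: s.bandwidth_mbps - s.used_bandwidth())
    best.running.append(job)
    return best.name


if __name__ == "__main__":
    sites = [OpportunisticSite("hpc-center", bandwidth_mbps=1000.0),
             OpportunisticSite("cloud", bandwidth_mbps=400.0)]
    print(place(Job("cpu-bound-simulation", io_rate_mbps=5.0), sites))
    print(place(Job("io-heavy-analysis", io_rate_mbps=800.0), sites))
```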