Authors:
Bockjoo Kim, Dimitri Bourilkov
Abstract
Modern distributed computing systems produce large amounts of monitoring data. For these systems to operate smoothly, underperforming or failing components must be identified quickly, and preferably automatically, enabling the system managers to react accordingly. In this contribution, we analyze job and data-transfer records collected during the operation of the LHC computing infrastructure. The monitoring data is harvested from an Elasticsearch database and converted to formats suitable for further processing. Based on various machine and deep learning techniques, we develop automatic tools for continuous monitoring of the health of the underlying systems. Our initial implementation is based on publicly available deep learning tools, the PyTorch and TensorFlow packages, running on state-of-the-art GPU systems.
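The harvest-and-flag pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the index query shape, the field names (`site`, `bytes`, `duration`), and the z-score detector are all assumptions standing in for the actual Elasticsearch schema and the deep learning models the paper uses.

```python
from statistics import mean, stdev

# An Elasticsearch-style query body for one day of transfer records
# (in practice it would be sent via a client such as elasticsearch-py;
# the field names here are illustrative assumptions).
query = {
    "query": {"range": {"@timestamp": {"gte": "now-1d/d", "lt": "now/d"}}},
    "size": 10000,
}

def to_features(hits):
    """Convert raw Elasticsearch hits to (site, throughput in MB/s) pairs."""
    return [
        (h["_source"]["site"],
         h["_source"]["bytes"] / h["_source"]["duration"] / 1e6)
        for h in hits
    ]

def flag_anomalies(features, n_sigma=3.0):
    """Flag sites whose throughput deviates more than n_sigma from the mean.

    A simple z-score stand-in for the machine/deep learning detectors
    mentioned in the abstract.
    """
    rates = [rate for _, rate in features]
    mu, sigma = mean(rates), stdev(rates)
    return [site for site, rate in features if abs(rate - mu) > n_sigma * sigma]

# Sample hits standing in for an Elasticsearch response: twenty healthy
# sites at ~100 MB/s and one underperforming site at ~1 MB/s.
hits = [{"_source": {"site": f"T2_{i}", "bytes": 1e9, "duration": 10}}
        for i in range(20)]
hits.append({"_source": {"site": "T2_SLOW", "bytes": 1e9, "duration": 1000}})

print(flag_anomalies(to_features(hits)))  # the slow site is flagged
```

In a production setting the z-score test would be replaced by a trained model (e.g. an autoencoder whose reconstruction error marks anomalies), but the surrounding plumbing, query, feature extraction, and thresholding, keeps the same shape.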