Distributed Data Collection for the Next Generation ATLAS EventIndex Project
Published: 2019
Volume: 214
Page: 04010
ISSN: 2100-014X
Container-title: EPJ Web of Conferences
Short-container-title: EPJ Web Conf.
Authors: Fernández Casaní Álvaro, Barberis Dario, Sánchez Javier, García Montoro Carlos, González de la Hoz Santiago, Salt Jose
Abstract
The ATLAS EventIndex currently runs in production in order to build a complete catalogue of events for experiments with large amounts of data. The current approach is to index all final produced data files at the CERN Tier-0 and at hundreds of grid sites, with a distributed data collection architecture that uses Object Stores to hold the conveyed information temporarily, while references to these objects are sent through a messaging system. The final back end for all the indexed data is a central Hadoop infrastructure at CERN; an Oracle relational database provides faster access to a subset of this information. In the future of ATLAS the event, rather than the file, should be the atomic unit of metadata, in order to accommodate future data processing and storage technologies. Files will no longer be static quantities: they may aggregate data dynamically and allow event-level granularity of processing in heavily parallel computing environments, which also simplifies the handling of data loss and/or extension. In this sense the EventIndex may evolve towards a generalised whiteboard, able to build collections and virtual datasets for end users. These proceedings describe the current distributed data collection architecture of the ATLAS EventIndex project, with details of the Producer, Consumer and Supervisor entities, and of the protocol and the information temporarily stored in the Object Store. They also show the data flow rates and the performance achieved since the Object-Store-based temporary storage approach was put into production in July 2017. We review the challenges imposed by the expected increasing rates, which will reach 35 billion new real events per year in Run 3 and 100 billion new real events per year in Run 4; for simulated events the numbers are even higher, with 100 billion events per year in Run 3 and 300 billion events per year in Run 4. We also outline the challenges we face in order to accommodate future use cases in the EventIndex.
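As an illustration of the data flow described in the abstract, the following minimal Python sketch mimics the Producer and Consumer roles exchanging references to payloads held in a temporary store. It is not the ATLAS implementation: the production system uses grid-site Producers, a Ceph-backed Object Store, a messaging broker and a Hadoop back end, whereas here the store is an in-memory dict, the broker is a queue.Queue, and all names (EventRecord, produce, consume, ...) are hypothetical stand-ins chosen for the example.

# Minimal sketch of the Producer/Consumer flow described above (illustration only).
# Assumptions: the real EventIndex uses an Object Store (e.g. Ceph/S3), a messaging
# broker and a Hadoop back end; here a dict and queue.Queue stand in for them, and
# all names below are hypothetical, not the ATLAS code.

import json
import queue
import uuid
from dataclasses import dataclass, asdict

# --- stand-ins for the external services ------------------------------------
object_store: dict[str, bytes] = {}         # temporary store for indexed payloads
message_queue: queue.Queue = queue.Queue()  # carries small references to payloads

@dataclass
class EventRecord:
    run_number: int
    event_number: int
    guid: str            # GUID of the file the event was indexed from
    trigger_info: str    # placeholder for trigger/selection metadata

def produce(records: list[EventRecord]) -> None:
    """Producer: index a file's events, upload the payload to the temporary store,
    then publish only a small reference message to the broker."""
    key = f"eventindex/{uuid.uuid4()}.json"
    payload = json.dumps([asdict(r) for r in records]).encode()
    object_store[key] = payload                               # PUT to the store
    message_queue.put({"key": key, "nevents": len(records)})  # reference only

def consume(backend_table: list[dict]) -> None:
    """Consumer: pop a reference, fetch the payload from the temporary store and
    append it to the central back end (Hadoop in production, a list here)."""
    ref = message_queue.get()
    payload = json.loads(object_store.pop(ref["key"]))  # remove after ingestion
    backend_table.extend(payload)
    message_queue.task_done()

# The Supervisor role (bookkeeping/validation) is reduced to a consistency check:
if __name__ == "__main__":
    backend: list[dict] = []
    produce([EventRecord(358031, i, "file-guid-0001", "L1_EM22VHI") for i in range(3)])
    consume(backend)
    assert len(backend) == 3 and not object_store  # all data ingested, store drained
    print(f"ingested {len(backend)} event records")

At the combined Run 4 rates quoted in the abstract (roughly 400 billion real plus simulated events per year), a collection chain of this shape would need to sustain on the order of 10^4 indexed events per second on average, which is why the payloads are batched in the store and only lightweight references travel through the messaging system.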