Abstract
Efficient handling of large data volumes has become a necessity in today's world, driven by the desire to extract more insight from data and to gain a better understanding of user trends that can be translated into economic incentives (profits, cost reduction, and various optimizations of data workflows and pipelines). In this paper, we discuss how modern technologies are transforming well-established patterns in HEP communities. New data insights can be achieved by embracing Big Data tools for a variety of use cases, from analytics and monitoring to training Machine Learning models at the terabyte scale. We provide concrete examples within the context of the CMS experiment, where Big Data tools already play, or will play, a significant role in daily operations.
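As an illustrative sketch of the kind of analytics workflow the abstract alludes to (not the paper's actual pipeline), the snippet below uses Apache Spark to aggregate dataset-access counts over a large monitoring dump. The input path and column names (dataset, nacc) are hypothetical placeholders; real CMS sources stored on HDFS would have their own layouts.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a Spark session; on a real cluster the master URL and
# resources would come from the site configuration.
spark = SparkSession.builder.appName("cms-dataset-popularity").getOrCreate()

# Hypothetical input: a Parquet dump of dataset-access records with
# columns (dataset, site, nacc). Actual CMS data-service snapshots
# on HDFS differ in naming and layout.
df = spark.read.parquet("hdfs:///cms/monitoring/dataset_access.parquet")

# Aggregate accesses per dataset -- the sort of monitoring/analytics
# query that scales to terabytes once the data live on HDFS.
popularity = (
    df.groupBy("dataset")
      .agg(F.sum("nacc").alias("total_accesses"))
      .orderBy(F.desc("total_accesses"))
)

popularity.show(10, truncate=False)
spark.stop()
```

The same session-based pattern extends naturally from aggregation queries to feeding terabyte-scale training sets into Machine Learning pipelines.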