Abstract
Hyper-Kamiokande is a next-generation, multi-purpose neutrino experiment with a primary focus on constraining CP violation in the lepton sector. It features a diverse science programme that includes neutrino oscillation studies, astrophysics, neutrino cross-section measurements, and searches for physics beyond the Standard Model, such as proton decay. Building on its predecessor, Super-Kamiokande, the Hyper-Kamiokande far detector has a total volume approximately five times larger and is expected to collect nearly 2 PB of data per year. The experiment will also include both on- and off-axis near detectors, including an Intermediate Water Cherenkov Detector. To manage the significant demands of storing and processing the data from these detectors, together with the associated Monte Carlo simulations for a range of physics studies, an efficient and scalable distributed computing model is essential. This model leverages the Worldwide LHC Computing Grid (WLCG) infrastructure and utilises the GridPP DIRAC instance for both workload management and file cataloguing. In this report we forecast the computing requirements of the Hyper-K experiment, estimated to reach around 35 PB of storage (per replica) and 8,700 CPU cores (~100,000 HS06) by 2036, and we outline the resources, tools, and workflows in place to satisfy this demand.
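As a brief illustration of the workload-management side of such a model, the sketch below shows how a job might be described and submitted through the standard DIRAC Python client API. It assumes an installed and configured DIRAC client with a valid grid proxy; the job name, executable, and sandbox contents are hypothetical placeholders, not the experiment's actual production configuration.

    # Minimal sketch of job submission via the DIRAC client API.
    # Assumes a configured DIRAC client and valid grid proxy;
    # the script name and parameters below are illustrative only.
    from DIRAC.Core.Base.Script import Script
    Script.parseCommandLine()  # initialise the DIRAC environment

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("hyperk-mc-example")           # hypothetical job name
    job.setExecutable("run_simulation.sh")     # hypothetical MC production script
    job.setInputSandbox(["run_simulation.sh"])
    job.setCPUTime(86400)                      # requested CPU time in seconds
    job.setOutputSandbox(["*.log"])

    dirac = Dirac()
    result = dirac.submitJob(job)              # returns an S_OK/S_ERROR dictionary
    if result["OK"]:
        print("Submitted job with ID", result["Value"])
    else:
        print("Submission failed:", result["Message"])

In a production system the same API is typically driven by automated workflow tools rather than hand-written scripts, with output files registered in the DIRAC File Catalog for later retrieval.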