Abstract
When designing a fusion power plant, many first-of-a-kind components are required. This presents a large potential design space, with as many dimensions as the component has parameters. In addition, multiphysics, multiscale, high-fidelity simulations are required to reliably capture a component’s performance under given boundary conditions. Even with high performance computing (HPC) resources, it is not possible to fully explore a component’s design space. Thus, effective interpolation between data points via machine learning (ML) techniques is essential. In sequential learning engineering optimisation, ML techniques inform the selection of the simulation parameters that give the highest expected improvement for the model, balancing exploitation of the current best design with exploration of uncertain areas of the design space. This paper demonstrates the application of an ML-driven design-of-experiments procedure to the sequential learning engineering design optimisation of a fusion component. A parameterised divertor monoblock is taken as a typical example of a fusion component whose modelling requires HPC simulation. The component’s geometry is then optimised using Bayesian optimisation, seeking the design that minimises the stress experienced by the component under operational conditions.
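To illustrate the sequential learning loop summarised above, the sketch below shows a minimal Bayesian optimisation with an expected improvement acquisition function. It is not the authors' implementation: the surrogate (a scikit-learn Gaussian process), the one-dimensional design variable, and the placeholder objective `peak_stress` standing in for the HPC monoblock simulation are all assumptions made for illustration only.

```python
# Minimal sketch of sequential learning via Bayesian optimisation (assumed
# setup, not the paper's actual workflow or simulation).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def peak_stress(x):
    # Hypothetical cheap objective; in the paper this would be the stress
    # returned by a multiphysics simulation of the parameterised monoblock.
    return np.sin(3 * x) + 0.5 * x**2

def expected_improvement(X_cand, gp, y_best, xi=0.01):
    # EI balances exploitation (low predicted stress) against exploration
    # (high predictive uncertainty) for a minimisation problem.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    imp = y_best - mu - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(4, 1))        # initial design of experiments
y = np.array([peak_stress(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                             # sequential learning iterations
    gp.fit(X, y)
    X_cand = np.linspace(-2.0, 2.0, 500).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, y.min())
    x_next = X_cand[np.argmax(ei)]              # next simulation to run
    X = np.vstack([X, x_next])
    y = np.append(y, peak_stress(x_next[0]))

print("best design:", X[np.argmin(y)], "stress:", y.min())
```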
Subject
Condensed Matter Physics, Nuclear Energy and Engineering