Affiliation:
1. Lawrence Livermore National Laboratory, Livermore, California, USA
Abstract
Deep neural network (DNN) surrogates of expensive physics simulations are enabling a rapid change in the way that common experimental design and analysis tasks are approached. Surrogate models allow simulations to be performed in parallel and separately from downstream tasks, thereby enabling analyses that would be impossible with the simulation in-the-loop; surrogates based on DNNs can effectively emulate diverse non-scalar data of the types collected in fusion and laboratory-astrophysics experiments. The challenge is in training the surrogate model, for which large ensembles of physics simulations must be run, preferably without wasting computational effort on uninteresting simulations. In this paper, we present an iterative sampling scheme that can preferentially propose simulations in interesting regions of parameter space without neglecting unexplored regions, allowing high-quality and wide-ranging surrogate models to be trained using 2–3 times fewer simulations compared to space-filling designs. Our approach uses an explicit importance function defined on the simulation output space, balanced against a measure of simulation density which serves as a proxy for surrogate accuracy. It is easy to implement and can be tuned to find interesting simulations early in the study, allowing surrogates to be trained quickly and refined as new simulations become available; this represents an important step towards the routine generation of deep surrogate models quickly enough to be truly relevant to experimental work.
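The abstract's core idea, an importance function on simulation outputs discounted by the density of already-run simulations, can be illustrated with a minimal sketch. This is not the authors' implementation: the Gaussian-target `importance` function, the kernel bandwidth, and the score formula are all hypothetical stand-ins chosen for illustration.

```python
import numpy as np

def importance(y_pred):
    # Hypothetical importance function on (predicted) simulation outputs:
    # here it simply favours outputs near an arbitrary target value 1.0.
    return np.exp(-((y_pred - 1.0) ** 2))

def sampled_density(x, X_done, bandwidth=0.2):
    # Crude kernel-density estimate of previously run simulations near x,
    # standing in for the paper's "measure of simulation density"
    # (a proxy for local surrogate accuracy).
    if len(X_done) == 0:
        return 0.0
    sq_dists = np.sum((X_done - x) ** 2, axis=1)
    return float(np.mean(np.exp(-sq_dists / (2.0 * bandwidth ** 2))))

def propose_batch(candidates, X_done, y_pred, k=4):
    # Score each candidate: predicted importance, discounted where the
    # neighbourhood is already densely sampled, so unexplored regions
    # are not neglected. Returns indices of the top-k candidates.
    dens = np.array([sampled_density(x, X_done) for x in candidates])
    scores = importance(y_pred) / (1.0 + dens)
    return np.argsort(scores)[-k:]

# Illustrative iteration: propose a batch, "run" the simulations,
# then fold them into the completed set for the next round.
rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 2.0, size=(50, 2))
X_done = rng.uniform(0.0, 2.0, size=(10, 2))
y_pred = candidates.sum(axis=1)          # stand-in surrogate prediction
batch = propose_batch(candidates, X_done, y_pred, k=4)
X_done = np.vstack([X_done, candidates[batch]])
```

In a real study the surrogate prediction `y_pred` would come from the partially trained DNN and would be refreshed each iteration as new simulations complete; the discounting term is what keeps the scheme from collapsing onto a single already-explored mode.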
Funder
U.S. Department of Energy
Cited by
3 articles.