Abstract
I consider the problem of model diagnostics, that is, the problem of criticizing a model prior to history matching by comparing data to an ensemble of simulated data based on the prior model (prior predictions). If the data are not deemed a credible prior prediction by the model diagnostics, some settings of the model should be changed before history matching is attempted. I particularly target methodologies that are computationally feasible for large models with large amounts of data. A multiscale methodology, which can be applied to analyze differences between data and prior predictions in a scale-by-scale fashion, is proposed for this purpose. The methodology is computationally inexpensive, straightforward to apply, and can handle correlated observation errors without making approximations. The multiscale methodology is tested on a set of toy models, on two simplistic reservoir models with synthetic data, and on real data and prior predictions from the Norne field. The tests include comparisons with a previously published method (termed the Mahalanobis methodology in this paper). For the Norne case, both methodologies led to the same decisions regarding whether to accept or discard the data as a credible prior prediction. The multiscale methodology led to correct decisions for the toy models and the simplistic reservoir models. For these models, the Mahalanobis methodology either led to incorrect decisions or was unstable with respect to the selection of the ensemble of prior predictions, or both.
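The baseline idea referenced above, checking whether observed data are credible under an ensemble of prior predictions via a Mahalanobis-type distance, can be sketched as follows. This is a minimal illustration only, not the paper's actual methodology; the function name, the use of the ensemble sample covariance, and the additive observation-error covariance `C_err` are assumptions introduced here for the example.

```python
import numpy as np

def mahalanobis_distance_sq(d_obs, prior_ens, C_err):
    """Squared Mahalanobis distance of observed data from the prior-prediction
    ensemble mean, under the combined ensemble + observation-error covariance.

    d_obs     : (n_data,) observed data vector
    prior_ens : (n_ens, n_data) ensemble of simulated data (prior predictions)
    C_err     : (n_data, n_data) observation-error covariance (may be correlated)
    """
    mu = prior_ens.mean(axis=0)
    # Sample covariance of the prior predictions plus the observation-error
    # covariance; C_err also regularizes a rank-deficient ensemble covariance.
    C = np.cov(prior_ens, rowvar=False) + C_err
    r = d_obs - mu
    # Solve C x = r instead of forming an explicit inverse.
    return float(r @ np.linalg.solve(C, r))

# Illustrative usage: data near the ensemble mean yield a small distance,
# data far outside the ensemble spread yield a large one.
rng = np.random.default_rng(0)
ens = rng.normal(size=(200, 3))
C_err = 0.01 * np.eye(3)
d2_near = mahalanobis_distance_sq(ens.mean(axis=0), ens, C_err)
d2_far = mahalanobis_distance_sq(np.full(3, 10.0), ens, C_err)
```

Under a Gaussian assumption, the squared distance can be compared against a chi-square quantile (degrees of freedom equal to the number of data points) to decide whether to accept the data as a credible prior prediction; a distance far in the tail suggests the prior model should be revised before history matching.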
Funder
NORCE Norwegian Research Centre AS
Publisher
Springer Science and Business Media LLC