Affiliation:
1. Stanford University (Corresponding author)
2. Stanford University
Abstract
Traditional closed-loop reservoir management (CLRM) entails the repeated application of history matching (based on newly observed data) followed by optimization of well settings. Existing treatments can provide well settings that fluctuate substantially between control steps, which may not be acceptable in practice. Another concern is that the project life (i.e., the time frame for the optimization) is often specified somewhat arbitrarily. In this work, we incorporate treatments for these important issues into a recently developed control-policy-based CLRM framework. This framework uses deep reinforcement learning (DRL) to train control policies that directly map observed well data to optimal well settings. Here, we introduce a procedure in which we train control policies, using DRL, to find optimal well bottomhole pressures (BHPs) for prescribed relative changes between control steps, with the project life also treated as an optimization variable. The goal of the optimizations is to maximize net present value (NPV), with project life determined such that a minimum acceptable rate of return (MARR) is achieved. We apply the framework to waterflooding cases involving 2D and 3D geological models. In the 3D case, realizations are drawn from multiple geological scenarios. Solutions from the control-policy approach are shown to be comparable, in terms of NPV, to those from deterministic realization-by-realization optimization and clearly superior to results from robust optimization over prior models. These observations hold for a range of specified MARR and relative-change values. The optimal well settings provided by the control policy display gradual ramping, consistent with operational requirements.
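To make the two additions to the framework concrete, the sketch below illustrates one plausible reading of them: limiting the relative change in well BHPs between control steps, and selecting the project life as the horizon over which the minimum acceptable rate of return (MARR) is still achieved. This is a minimal, hypothetical illustration, not the authors' implementation; all function names, the clipping rule, and the NPV-at-MARR criterion for project life are assumptions made here for clarity.

```python
import numpy as np

def apply_relative_change_limit(bhp_prev, bhp_proposed, max_rel_change=0.1):
    """Clip a policy's proposed BHPs so the relative change from the previous
    control step stays within the prescribed bound (assumed clipping rule)."""
    lower = bhp_prev * (1.0 - max_rel_change)
    upper = bhp_prev * (1.0 + max_rel_change)
    return np.clip(bhp_proposed, lower, upper)

def npv(cash_flows, rate, n_steps):
    """Discounted sum of the first n_steps per-period cash flows at the given rate."""
    t = np.arange(1, n_steps + 1)
    return float(np.sum(cash_flows[:n_steps] / (1.0 + rate) ** t))

def project_life_for_marr(cash_flows, capex, marr):
    """Longest horizon (in control steps) whose NPV, discounted at the MARR,
    remains nonnegative after capital costs (assumed project-life criterion)."""
    best = 0
    for n in range(1, len(cash_flows) + 1):
        if npv(cash_flows, marr, n) - capex >= 0.0:
            best = n
    return best

# Example usage with illustrative (fabricated) numbers:
bhp_prev = np.array([2500.0, 2600.0])          # psi, previous control step
bhp_new = apply_relative_change_limit(bhp_prev, np.array([3000.0, 2550.0]))
life = project_life_for_marr(np.array([4.0, 3.5, 3.0, 1.0, 0.2]), capex=8.0, marr=0.15)
```

Under this reading, the DRL policy proposes BHPs, the clipping step enforces gradual ramping between control steps, and the economic horizon of the optimization is whatever project life still clears the specified MARR.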
Publisher
Society of Petroleum Engineers (SPE)
Subject
Geotechnical Engineering and Engineering Geology, Energy Engineering and Power Technology
Cited by
3 articles.