Abstract
Well test analysis has been used for many years to assess well condition and obtain reservoir parameters. Early interpretation methods (using straight lines or log-log pressure plots) were limited to the estimation of well performance. With the introduction of pressure-derivative analysis in 1983 and the development of complex interpretation models able to account for detailed geological features, well test analysis has become a very powerful tool for reservoir characterization. A new milestone has been reached recently with the introduction of deconvolution. Deconvolution is a process that converts variable-rate pressure data into a single constant-rate drawdown, thus making more data available for interpretation than in the original data set, where only periods at constant rate can be analyzed. Consequently, boundaries can be seen in deconvolved data, a considerable advantage over conventional analysis, in which boundaries are often not seen and must be inferred. This has a significant impact on the ability to certify reserves.
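To illustrate the variable-rate relation that deconvolution inverts, the sketch below builds a synthetic pressure response by superposition in time. The unit-rate response function, the rate schedule, and all numerical constants are hypothetical illustrations, not the algorithm or data of this paper: deconvolution would run the process in reverse, recovering the constant-rate drawdown from the measured pressure and rate history.

```python
import numpy as np

def unit_rate_response(t):
    """Hypothetical unit-rate (constant-rate) pressure drawdown versus time.
    A simple logarithmic shape stands in for an infinite-acting response;
    the functional form is illustrative only."""
    return np.where(t > 0.0, np.log1p(np.maximum(t, 0.0)), 0.0)

# Time grid and a simple variable-rate schedule: rate 1.0 from t = 0,
# reduced to 0.6 at t = 10 (both values are arbitrary).
t = np.linspace(0.1, 20.0, 200)
rates = [(0.0, 1.0), (10.0, 0.6)]  # (start time, rate) pairs

# Superposition in time (Duhamel's principle): each rate change adds a
# time-shifted unit response scaled by the change in rate.
dp = np.zeros_like(t)
prev_rate = 0.0
for start, rate in rates:
    dp += (rate - prev_rate) * unit_rate_response(t - start)
    prev_rate = rate

# dp is the variable-rate pressure drop a gauge would record; deconvolving
# dp against the rate history would recover unit_rate_response over the
# full test duration, which is what makes boundaries visible.
```

The key point is that the constant-rate response spans the entire test, not just individual constant-rate periods, so late-time boundary effects that are masked in the raw variable-rate data become analyzable.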
The paper reviews the evolution of analysis techniques over the last half-century and shows how improvements have come in a series of step changes twenty years apart. Each one has increased the ability to discriminate between potential interpretation models and to verify the consistency of the analysis. This has drastically increased the amount of information that can be extracted from well test data and, more importantly, the confidence in that information.
Introduction
Results that can be obtained from well testing are a function of the range and the quality of the pressure and rate data available, and of the approach used for their analysis. Consequently, at any given time, the extent and quality of an analysis (and therefore what can be expected from well test interpretation) are limited by the state-of-the-art in both data acquisition and analysis techniques. As data improve, and better interpretation methods are developed, more and more useful information can be extracted from well test data.
Early well test analysis techniques were developed independently from one another and often gave widely different results for the same tests.1 This had several consequences: an analysis was never complete, because there was always an alternative analysis method that had not been tried; interpreters had no basis on which to agree on analysis results; and the general opinion was that well testing was useless, given the wide range of possible results.
Significant progress was achieved in the late 1970s and early 1980s with the development of an integrated methodology based on signal theory and the subsequent introduction of derivatives. It was found that, although reservoirs all differ in depth, pressure, fluid composition, geology, etc., their well test behaviors were made up of a small number of basic components that were the same everywhere, every time. Well test analysis was about identifying these components, which could be achieved in a systematic way, following a well-defined process. The outcome was a well test interpretation model, which defined how much and what kind of knowledge could be extracted from the data. The interpretation model also determined which of the various published analysis methods were applicable and when. Importantly, the integrated methodology made well test analysis easy to learn and repeatable. The evolution of the state of the art in well test analysis throughout these years can be followed from review papers that have appeared at regular intervals in the petroleum literature.1–5