Affiliation:
1. Computer Modelling Group
Summary
The major problem in phase-behavior matching with a cubic equation of state (EOS) is the selection of regression parameters. Many different sets of parameters can appear to be the best choice; therefore, a dynamic parameter-selection scheme is desired to avoid tedious and time-consuming trial-and-error regression runs. This paper proposes a regression technique in which the most significant parameters are selected from a large set of parameters during the regression process. This technique reduces the regression effort considerably and alleviates the problem associated with a priori selection of regression parameters. The technique's success is demonstrated by matching experimental data for a light oil and a gas condensate.
Introduction
Cubic EOS's generally do not predict laboratory data of oil/gas mixtures accurately without tuning of the EOS parameters. The practice often has been to adjust the properties of the components (usually the heavy fractions), e.g., $p_c$, $T_c$, and $\omega$, to fit the experimental data.
The objective function in the regression involves the solution of complex nonlinear equations, such as flash and saturation-pressure calculations. A robust minimization method is therefore required for rapid convergence to the minimum. In this paper, a modification of Dennis et al.'s adaptive least-squares algorithm is used. The modification involves the use of some other nonlinear optimization concepts on direction and step-size selection.
Dynamic selection of the most meaningful regression parameters from a larger set of variables is described. This feature is extremely useful in EOS fitting because it alleviates the problem of deciding a priori the best regression variables, which is extremely difficult.
It should be stressed that the regression procedure will not correct the deficiencies of the EOS used and that the EOS predictive capability depends entirely on the type and accuracy of the data used in the regression. For predictive purposes, attempts should be made to ensure that the tuned parameters remain within reasonable physical limits.
All calculations in this paper are performed with the Peng-Robinson EOS, although the scheme is general and can be applied to any EOS.
Regression Method
The implementation of the dynamic-parameter-selection strategy for tuning the EOS is a nonlinear optimization problem. The goal is to minimize a weighted sum of squares
$$F(\mathbf{x}) = \sum_{i=1}^{n_d} \left[ r_i(\mathbf{x}) \right]^2, \qquad (1)$$

where $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ = vector of $n$ regression parameters and $n_d$ = number of measurements to be fitted. Usually, $n_d$ is greater than $n$. The elements of $\mathbf{r}(\mathbf{x})$, denoted by $r_i(\mathbf{x})$, are nonlinear functions of $\mathbf{x}$; i.e.,
$$r_i(\mathbf{x}) = w_i \, \frac{e_i^{\mathrm{calc}}(\mathbf{x}) - e_i^{\mathrm{exp}}}{e_i^{\mathrm{exp}}}, \qquad (2)$$

where $e_i^{\mathrm{calc}}(\mathbf{x})$ = values calculated with the EOS, $e_i^{\mathrm{exp}}$ = corresponding experimental measurements, and $w_i$ = weight associated with the ith experimental data point. Note that the differences between the experimental and calculated values are normalized by dividing by $e_i^{\mathrm{exp}}$ in Eq. 2. This brings the magnitudes of the $r_i$ to comparable values.
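As a minimal sketch of Eqs. 1 and 2 (the function names and array layout here are illustrative, not taken from the paper), the residual vector and objective function can be evaluated as follows:

```python
import numpy as np

def residuals(e_calc, e_exp, weights):
    """Weighted, normalized residuals of Eq. 2: r_i = w_i*(e_i_calc - e_i_exp)/e_i_exp."""
    e_calc = np.asarray(e_calc, dtype=float)
    e_exp = np.asarray(e_exp, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights * (e_calc - e_exp) / e_exp

def objective(e_calc, e_exp, weights):
    """Weighted sum of squares of Eq. 1: F(x) = sum_i r_i(x)^2."""
    r = residuals(e_calc, e_exp, weights)
    return float(r @ r)
```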
The minimization of $F(\mathbf{x})$ may be solved by various methods for nonlinear parameter estimation and for nonlinear optimization. The general-purpose optimization methods, however, do not take advantage of the structure of the nonlinear least-squares problem. Several strategies are available that exploit that structure. Coats and Smart used a modified linear programming least-squares algorithm, while Watson and Lee used a modification of the Levenberg-Marquardt algorithm to solve the nonlinear least-squares problem. In this paper, a modification of the adaptive least-squares algorithm of Dennis et al. is used. The present method departs from Dennis et al.'s approach in the use of different nonlinear optimization concepts for selecting the search direction and the step size. Details of the regression method are described in the Appendix.
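The adaptive least-squares algorithm itself is described in the Appendix. Purely as a hedged illustration of how the Jacobian enters a least-squares search direction, the sketch below uses a plain Gauss-Newton step with optional Levenberg-Marquardt-style damping; the function name and damping scheme are assumptions for this example, not the paper's method:

```python
import numpy as np

def damped_gauss_newton_step(J, r, lam=0.0):
    """Solve (J^T J + lam*I) dx = -J^T r for the search direction dx.

    lam = 0 gives the plain Gauss-Newton direction; lam > 0 adds
    Levenberg-Marquardt-style damping.  This only illustrates how the
    Jacobian J and residual vector r define a least-squares direction;
    it is not the adaptive algorithm of Dennis et al. used in the paper.
    """
    n = J.shape[1]
    A = J.T @ J + lam * np.eye(n)   # Gauss-Newton normal matrix (+ damping)
    g = J.T @ r                     # gradient of F(x)/2
    return np.linalg.solve(A, -g)
```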
Calculation of Jacobian
It was found early in the investigation that the key to an efficient algorithm would be a fast and accurate estimation of the Jacobian matrix, $\mathbf{J} = \partial \mathbf{r}/\partial \mathbf{x}$. In the Appendix, it is shown that matrix $\mathbf{J}$ is also used to determine the Hessian matrix of $F(\mathbf{x})$.
The Jacobian $\mathbf{J}$ is obtained through numerical differentiation. Although the EOS can be differentiated analytically with respect to the regression parameters $\mathbf{x}$, the residuals $r_i$ are extremely complex functions of these parameters (e.g., the GOR of the last step in a differential liberation) and are more amenable to numerical differentiation.
The calculations of $r_i$ also involve the iterative solution of nonlinear systems of equations (e.g., flash and saturation-pressure calculations) whose results are available only to within some convergence tolerance $\epsilon_i$. Thus, in the calculation of $\mathbf{J}$ through numerical differentiation, the perturbations of the independent variables $\mathbf{x}$ must not be masked by the convergence tolerances $\epsilon_i$ or the round-off errors associated with the calculations of $r_i$. It has been found that a perturbation of 1% of the independent variables $\mathbf{x}$ is adequate for computing $\mathbf{J}$ through numerical differentiation.
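A minimal sketch of this forward-difference Jacobian, using the 1% relative perturbation noted above, is given below; residual_fn is a hypothetical callable assumed to wrap the EOS flash/saturation-pressure calculations and return the residual vector $\mathbf{r}(\mathbf{x})$:

```python
import numpy as np

def numerical_jacobian(residual_fn, x, rel_step=0.01):
    """Forward-difference Jacobian J = dr/dx.

    Each regression parameter is perturbed by 1% of its current value
    (rel_step=0.01), the perturbation size found adequate in the text.
    The perturbation must stay large relative to the convergence
    tolerances of the underlying flash/saturation-pressure solves.
    """
    x = np.asarray(x, dtype=float)
    r0 = residual_fn(x)
    J = np.empty((r0.size, x.size))
    for j in range(x.size):
        dx = rel_step * x[j] if x[j] != 0.0 else rel_step
        xp = x.copy()
        xp[j] += dx
        J[:, j] = (residual_fn(xp) - r0) / dx
    return J
```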
Selection of Regression Parameters
Given a global set of $n_p$ regression parameters, the method selects an active subset of $n$ parameters with which regression will be performed. The global set of regression parameters is supplied by the user and includes all or some of the following: $p_c$, $T_c$, $V_c$, the volume-translation parameters, the interaction coefficients $d_{ij}$, and the exponent $\theta$ in Eq. 3.
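The paper's actual selection criterion is given in its Appendix and is not reproduced here. Purely to illustrate the idea of picking an active subset from a global set, the following hypothetical heuristic ranks the global parameters by the sensitivity of the residuals (column norms of the Jacobian) and keeps the most significant ones:

```python
import numpy as np

def select_active_parameters(J, n_active):
    """Illustrative selection heuristic (not the paper's criterion):
    rank the Jacobian columns by norm and keep the n_active parameters
    to which the residuals are most sensitive.
    """
    sensitivity = np.linalg.norm(J, axis=0)   # one value per global parameter
    order = np.argsort(sensitivity)[::-1]     # most sensitive first
    return np.sort(order[:n_active])          # indices of the active subset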
The interaction coefficients between hydrocarbons are estimated from the following equation:

$$d_{ij} = 1 - \left( \frac{2\,V_{ci}^{1/6}\,V_{cj}^{1/6}}{V_{ci}^{1/3} + V_{cj}^{1/3}} \right)^{\theta}, \qquad (3)$$

where $V_{ci}$ and $V_{cj}$ are the critical volumes of Components i and j and $\theta$ is an adjustable exponent.
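A short sketch of this correlation, as reconstructed in Eq. 3 above (function and argument names are illustrative), follows:

```python
def hc_interaction_coefficient(vc_i, vc_j, theta):
    """Critical-volume correlation for hydrocarbon-hydrocarbon interaction
    coefficients of the form given in Eq. 3.

    vc_i, vc_j : critical molar volumes of components i and j
    theta      : adjustable exponent (a candidate regression parameter)
    """
    num = 2.0 * (vc_i ** (1.0 / 6.0)) * (vc_j ** (1.0 / 6.0))
    den = vc_i ** (1.0 / 3.0) + vc_j ** (1.0 / 3.0)
    return 1.0 - (num / den) ** theta
```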
The volume translation techniques of Peneloux et al. are used to correct the molar volume as follows:
$$V^{\mathrm{corr}} = V - \sum_{i=1}^{n_c} y_i c_i, \qquad (4)$$

where $V$ = molar volume from the EOS, $V^{\mathrm{corr}}$ = corrected molar volume, $y_i$ = mole fraction of Component i, and $c_i$ = volume-translation (shift) parameter of Component i. Eq. 4 has been shown to improve the density predictions of the EOS.
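As a minimal sketch of Eq. 4 (names are illustrative; the shift parameters $c_i$ would come from the Peneloux et al. correlations or from regression):

```python
import numpy as np

def corrected_molar_volume(v_eos, y, c):
    """Volume translation of Eq. 4: subtract the mole-fraction-weighted sum
    of the component shift parameters c_i from the EOS molar volume.

    v_eos : molar volume from the (untranslated) EOS
    y     : mole fractions of the components
    c     : volume-shift (translation) parameters, one per component
    """
    return v_eos - float(np.dot(np.asarray(y, float), np.asarray(c, float)))
```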