Efficient Parallel Computing Using Multiscale Multimesh Reservoir Simulation

Authors:

Safian Atan1, Hossein Kazemi2, Donald H. Caldwell1

Affiliation:

1. Marathon Oil Company

2. Colorado School of Mines

Abstract

The multiscale, multimesh flow simulator was designed for the sole purpose of running very large, heterogeneous reservoir flow problems on massively parallel computers. This paper presents the flow simulation results and the corresponding CPU times. The simulator is written in Fortran 90/95 with OpenMP directives and compiled on high-performance SMP computers. The simulations were performed for several highly heterogeneous, channelized reservoir cases with realistic rock-fluid interaction (viscous, capillary, gravity, and compressibility effects) to evaluate the efficacy of multiscale, multimesh simulation in parallel computing. The multiscale technique is shown to reduce computing time by several orders of magnitude while maintaining the same accuracy as conventional fine-scale simulation.

Introduction

In simulating displacement processes in large heterogeneous reservoirs, the computation is both time-consuming and expensive, so there has been a tendency to upscale fine-grid models to reduce the required CPU time. The problem with upscaling is that it often introduces inaccuracy into the results (e.g., large numerical dispersion). Upscaling also cannot capture the architecture of the flow channels effectively, so channeling effects are suppressed. Finally, upscaling algorithms usually lack a solid physical foundation: permeability upscaling has been handled through logical flow-averaging techniques, but upscaling of relative permeability curves remains poorly developed.1–4 Consequently, to minimize these upscaling issues, we resort to a multimesh, multiscale computing methodology5–6 that preserves the reservoir flow and transport characteristics at the very fine level while reducing the inherent computing time by several orders of magnitude. Multiscale computation has been reported previously by several authors.7–17 We presented an extension of the multiscale method for both single- and dual-porosity reservoirs at a previous meeting.5–6 Since then, we have improved our computing methodology, which is the subject of this paper. The multiscale, multimesh simulator was compiled for a 64-bit SGI Altix with 256 1.5-GHz Itanium 2 CPUs; for the purposes of this study, however, we limit our usage to a maximum of 32 CPUs.

Computing Methodology

We solve the steady-state pressure equation on the global fine-grid mesh to obtain the flux distribution at the coarse-grid boundaries. These flux distributions are used as the weighting function for the local pressure update, instead of the transmissibility weighting used in our previous work.5–6 We also use this steady-state fine-grid flux distribution at the coarse-grid boundaries to calculate the effective permeability tensor of each coarse gridblock (a simplified sketch of this flux-based upscaling follows the computation sequence below). This upscaling approach differs from classical flow-based permeability upscaling, which imposes constant pressure at the boundaries; the latter is equivalent to imposing a fixed pressure gradient across the coarse-grid domain.

The computation sequence is as follows (illustrative sketches of the flux weighting, the red-black iteration, and the CFL limit also follow the list):

a. Obtain the global fine-scale steady-state pressure solution to calculate the fine-scale flux weights at the boundaries of each coarse-scale node. This information is used to calculate the fine-scale fluxes within each coarse-scale node. For computational efficiency on very large grid systems, we use a block Jacobi iteration algorithm; for parallel processing, the block Jacobi iteration can be performed with a red-black ordering scheme.

b. Obtain the global unsteady-state coarse-scale pressure solution at a large time step, Δt1, to calculate the coarse-scale fluxes.

c. Calculate the unsteady-state fine-scale fluxes at the coarse-grid boundaries from the coarse-scale fluxes, weighted by the flux weights of Step a.

d. Calculate the fine-scale pressures, and the fine-scale fluxes at internal interfaces, within each coarse-scale gridblock using the boundary conditions obtained in Step c.

e. Calculate the fine-grid saturations using smaller time steps, Δt2, constrained by the CFL criterion for the IMPES or sequential approach.
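As a concrete illustration of the flux-based upscaling above, consider single-phase flow in the x-direction across a coarse gridblock of dimensions Lx x Ly x Lz. The abstract does not give the construction of the full tensor; the simplified diagonal-tensor form below, in our own notation, only conveys the idea:

k_{xx}^{\mathrm{eff}} = \frac{\mu \, L_x \sum_{f} q_f^{ss}}{A_x \, \Delta\bar{p}}

where \sum_f q_f^{ss} is the net steady-state fine-scale flux through the block's x-faces, A_x = L_y L_z is the cross-sectional area, \mu is the fluid viscosity, and \Delta\bar{p} is the difference between the face-averaged fine-scale pressures on the inflow and outflow sides. Because the boundary condition is the computed fine-scale flux rather than a constant pressure, the effective permeability reflects the actual flow field rather than an imposed uniform gradient.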
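The boundary flux weighting of Steps a–c can be written compactly. The notation here is ours, not the paper's: q_f^{ss} is the steady-state fine-scale flux through fine face f on a coarse boundary segment, and Q_c^{n+1} is the unsteady-state coarse-scale flux through that segment at the new time level:

w_f = \frac{q_f^{ss}}{\sum_{f' \in \partial\Omega_c} q_{f'}^{ss}}, \qquad q_f^{n+1} = w_f \, Q_c^{n+1}

where \partial\Omega_c denotes the set of fine faces making up the coarse boundary segment. The weights come from the global steady-state solution of Step a and distribute each coarse-scale flux of Step b over its fine faces in Step c.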
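The red-black ordering of Step a can be illustrated with a short Fortran 90/OpenMP sketch, in the spirit of the simulator's stated implementation language. The paper applies block Jacobi over coarse-scale blocks; the point-wise checkerboard sweep below, with hypothetical array names and a five-point stencil, only illustrates why the ordering parallelizes: within one color, every cell depends solely on neighbors of the other color, so all updates of that color are independent.

   subroutine red_black_sweep(nx, ny, tx, ty, q, p)
     ! Hypothetical sketch, not the paper's code: one red-black sweep for a
     ! steady-state pressure equation on a 2D five-point stencil.
     implicit none
     integer, intent(in)    :: nx, ny
     real(8), intent(in)    :: tx(nx+1, ny)       ! x-face transmissibilities
     real(8), intent(in)    :: ty(nx, ny+1)       ! y-face transmissibilities
     real(8), intent(in)    :: q(nx, ny)          ! source/sink term
     real(8), intent(inout) :: p(0:nx+1, 0:ny+1)  ! pressure, with ghost cells
     integer :: i, j, color
     real(8) :: diag

     do color = 0, 1                    ! 0 = "red" cells, 1 = "black" cells
        !$omp parallel do private(i, diag)
        do j = 1, ny
           do i = 1 + mod(j + color, 2), nx, 2    ! checkerboard ordering
              diag = tx(i, j) + tx(i+1, j) + ty(i, j) + ty(i, j+1)
              p(i, j) = (tx(i, j)   * p(i-1, j) + tx(i+1, j) * p(i+1, j)   &
                       + ty(i, j)   * p(i, j-1) + ty(i, j+1) * p(i, j+1)   &
                       + q(i, j)) / diag
           end do
        end do
        !$omp end parallel do
     end do
   end subroutine red_black_sweep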
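The CFL criterion bounding Δt2 in Step e is not spelled out in the abstract. One common IMPES-type form, assuming two-phase water-oil displacement, is

\Delta t_2 \le \min_i \frac{\phi_i \, V_i}{q_{T,i} \, (df_w/dS_w)_{\max}}

where \phi_i and V_i are the porosity and bulk volume of fine cell i, q_{T,i} is the total volumetric flux through the cell, and f_w(S_w) is the water fractional-flow function. Taking the minimum over all fine cells keeps every saturation update stable, so several Δt2 steps are nested inside each coarse pressure step Δt1.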

Publisher

SPE

Cited by 2 articles.

1. Development of a framework for parallel reservoir simulation. The International Journal of High Performance Computing Applications, 2018-08-29.

2. References and Bibliography. Advanced Petroleum Reservoir Simulation, 2016-08-08.
