TransInver: 3D data-driven seismic inversion based on self-attention
Published: 2023-11-06
Volume: 89, Issue: 1, Pages: WA127-WA141
ISSN: 0016-8033
Journal: GEOPHYSICS
Language: en
Authors:
Li Kewen (1), Dou Yimin (2), Xiao Yuan (1), Jing Ruilin (3), Zhu Jianbing (3), Ma Chengjie (3)
Affiliations:
1. College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, China.
2. College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, China (corresponding author).
3. Shengli Oilfield Company, SINOPEC, Dongying, China.
Abstract
Recently, convolutional neural network (CNN)-based deep learning (DL) for impedance inversion has been extended to multiple dimensions. Training multidimensional DL inversion requires extracting supervised information from sparse 1D well-log labels. Fully convolutional networks rely on their parameter-sharing mechanism and receptive fields to achieve this, but their perceptual range is limited, making it difficult to capture long-range correlations in seismic data. The transformer is a type of network based entirely on self-attention, and it has demonstrated remarkable performance across various tasks and domains. However, its suitability for 3D seismic inversion is restricted by its high computational workload, fixed input-size requirement, and inadequate handling of low-level details. The primary goal of this work is to reengineer the self-attention mechanism so that it better suits seismic impedance inversion. The high-dimensional self-attention is decoupled into dual low-dimensional attention paths, reducing the computation of dense connections and matrix dot products. Shared parameters replace fully connected layers, allowing the network to accept inputs of varying sizes. In addition, its local modeling capability is enhanced by integrating it with the residual structure of the CNN. We name the resulting structure the Self-Attention ResBlock, which serves as the basic unit for constructing TransInver. Comparative experiments indicate that TransInver significantly outperforms 3D methods such as UNet, TransUNet, and HRNet, as well as 1D inversion methods. TransInver produces reliable inversion results using only nine well logs for SEAM Phase I and three well logs for the Netherlands F3 field data set. This backbone network delivers excellent inversion performance without depending on auxiliary means such as low-frequency constraints or semisupervised frameworks.
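The core cost-saving idea in the abstract — replacing one self-attention over all voxels of a 3D cube with two attentions over much shorter, low-dimensional sequences — can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration of decoupled (axial-style) attention, not the authors' TransInver implementation: the function names, the vertical/lateral choice of axes, and the additive fusion of the two paths are all hypothetical, and learned query/key/value projections are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Single-head scaled dot-product self-attention over one sequence.

    x: (n, c) array of n positions with c channels; returns (n, c).
    (Projections W_q, W_k, W_v are omitted in this sketch.)
    """
    n, c = x.shape
    scores = x @ x.T / np.sqrt(c)          # (n, n) pairwise similarities
    return softmax(scores, axis=-1) @ x    # attention-weighted mixture

def decoupled_attention(volume):
    """Hypothetical dual-path attention over a seismic cube.

    volume: (t, x, y, c). Full 3D attention would cost O((t*x*y)^2);
    here each path only attends along a low-dimensional slice:
    path 1 along the vertical (time/depth) axis per trace,
    path 2 within each lateral time slice.
    """
    t, x, y, c = volume.shape
    out_v = np.empty_like(volume)
    for i in range(x):                     # path 1: one (t, c) trace at a time
        for j in range(y):
            out_v[:, i, j, :] = self_attention(volume[:, i, j, :])
    out_h = np.empty_like(volume)
    for k in range(t):                     # path 2: one (x*y, c) slice at a time
        flat = volume[k].reshape(x * y, c)
        out_h[k] = self_attention(flat).reshape(x, y, c)
    return out_v + out_h                   # assumed fusion of the two paths
```

Because neither path depends on a fixed sequence length, the same operation applies to cubes of any size — consistent with the abstract's claim that shared parameters allow flexible input sizes.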
Funder
China University of Petroleum (East China) Graduate Student Innovation Fund Program; Natural Science Foundation of Shandong Province; National Natural Science Foundation of China
Publisher
Society of Exploration Geophysicists
Subject
Geochemistry and Petrology, Geophysics
Cited by: 4 articles.