Affiliation:
1. University of Alberta, Department of Physics, Edmonton, Alberta T6G 2E1, Canada.
Abstract
Accounting for the Hessian in full-waveform inversion (FWI) can lead to higher convergence rates, improved resolution, and better mitigation of parameter trade-off in multiparameter problems. In spite of these advantages, the adoption of second-order optimization methods (e.g., truncated Newton [TN]) has been precluded by their high computational cost. We propose a subsampled TN (STN) algorithm for time-domain FWI, with applications presented for the elastic isotropic case. By using uniform or nonuniform source subsampling during the computation of Hessian-vector products, we reduce the number of partial differential equation (PDE) solves required per iteration when compared to the conventional TN algorithm. We evaluate the performance of STN through synthetic inversions on the Marmousi II and BP 2.5D models, using the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) and TN algorithms as benchmarks. We determine that STN reaches a target misfit reduction at an overall cost comparable to first-order gradient methods, while retaining the favorable convergence properties of TN methods. Furthermore, we present an example in which nonuniform sampling outperforms uniform sampling in STN due to highly nonuniform source contributions to the Hessian.
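The following is a minimal, self-contained sketch (Python/NumPy, not the authors' code) of the idea summarized above: a truncated Newton step whose inner conjugate-gradient solve uses Hessian-vector products computed from only a random subset of sources. A toy per-source least-squares misfit stands in for the per-source, PDE-constrained FWI misfit, and all names here (`hvp_subsampled`, `truncated_cg`, the batch size) are illustrative assumptions.

```python
# Minimal sketch of a subsampled truncated Newton (STN) step.
# Toy stand-in problem: each "source" s contributes 0.5*||A_s m - d_s||^2,
# playing the role of one source's FWI misfit (each A_s application
# standing in for a PDE solve).
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_params = 64, 30
A = rng.standard_normal((n_sources, 5, n_params))  # per-source forward operators
m_true = rng.standard_normal(n_params)
d = np.einsum("sij,j->si", A, m_true)              # per-source observed data

def gradient(m):
    """Full gradient: sum of per-source gradients over ALL sources."""
    r = np.einsum("sij,j->si", A, m) - d
    return np.einsum("sij,si->j", A, r)

def hvp_subsampled(v, sample):
    """Gauss-Newton Hessian-vector product over the sampled sources only,
    rescaled so it is an unbiased estimate of the full Hessian action."""
    Hv = sum(A[s].T @ (A[s] @ v) for s in sample)
    return (n_sources / len(sample)) * Hv

def truncated_cg(g, sample, max_iter=10, tol=1e-3):
    """Inexact (truncated) CG solve of H_S p = -g."""
    p = np.zeros_like(g)
    r = -g.copy()          # residual of H_S p + g at p = 0
    q = r.copy()
    for _ in range(max_iter):
        Hq = hvp_subsampled(q, sample)
        alpha = (r @ r) / (q @ Hq)
        p += alpha * q
        r_new = r - alpha * Hq
        if np.linalg.norm(r_new) < tol * np.linalg.norm(g):
            break
        q = r_new + ((r_new @ r_new) / (r @ r)) * q
        r = r_new
    return p

m = np.zeros(n_params)
batch = 8                  # sources per Hessian-vector product (vs. all 64 in TN)
for it in range(20):
    g = gradient(m)
    sample = rng.choice(n_sources, size=batch, replace=False)  # uniform subsampling
    p = truncated_cg(g, sample)
    m += p                 # fixed unit step; a line search would be used in practice
    misfit = 0.5 * np.sum((np.einsum("sij,j->si", A, m) - d) ** 2)
    print(it, misfit)
```

In the nonuniform variant mentioned in the abstract, the uniform `rng.choice` draw would be replaced by sampling with probabilities proportional to an estimate of each source's contribution to the Hessian, with the `n_sources / len(sample)` rescaling replaced by the corresponding importance weights.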
Publisher
Society of Exploration Geophysicists
Subject
Geochemistry and Petrology, Geophysics
Cited by
17 articles.