Affiliation:
1. King Abdullah University of Science and Technology, Thuwal, Saudi Arabia. (corresponding author)
2. King Abdullah University of Science and Technology, Thuwal, Saudi Arabia.
Abstract
StorSeismic is a recently introduced transformer-based model that adapts to various seismic processing tasks through its pretraining and fine-tuning strategy. The original implementation of StorSeismic uses a sinusoidal positional encoding (PE) and a conventional self-attention mechanism, both borrowed from natural language processing applications. These components provide good results for seismic processing, but they also show limitations in efficiency and expressiveness. We modify these two key components by using relative PE and low-rank attention matrices as replacements for the standard ones. We test our changes on processing tasks applied to realistic Marmousi and offshore field data in a sequential strategy: denoising, direct-arrival removal, multiple attenuation, and finally root-mean-squared velocity (V_RMS) prediction for normal moveout correction. We observe faster pretraining and competitive results on the fine-tuning tasks, with fewer parameters to train compared with the standard model.
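The abstract does not specify how the low-rank attention matrices are formed; a common way to obtain a low-rank attention map is to project the keys and values along the sequence axis before computing the scores, so the attention matrix shrinks from n-by-n to n-by-k. The sketch below illustrates that general idea in NumPy; the projection matrices E and F, their initialization, and all dimension choices are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def low_rank_attention(Q, K, V, E, F):
    """Single-head attention with keys/values projected to a low rank k.

    Q, K, V: (n, d) query/key/value matrices.
    E, F:    (k, n) learned projection matrices (hypothetical here).
    """
    K_proj = E @ K                                # (k, d)
    V_proj = F @ V                                # (k, d)
    scores = Q @ K_proj.T / np.sqrt(Q.shape[-1])  # (n, k) instead of (n, n)
    return softmax(scores, axis=-1) @ V_proj      # (n, d)

rng = np.random.default_rng(0)
n, d, k = 128, 32, 16  # sequence length, model dim, assumed low rank
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E = rng.standard_normal((k, n)) / np.sqrt(n)
F = rng.standard_normal((k, n)) / np.sqrt(n)
out = low_rank_attention(Q, K, V, E, F)
print(out.shape)  # (128, 32)
```

With this factorization the score matrix costs O(n·k·d) to build rather than O(n²·d), which is one plausible source of the faster pretraining and smaller parameter count the abstract reports.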
Publisher
Society of Exploration Geophysicists