Authors:
Wang Chong Ming, Wang Xing Jian, Chen Yang, Wen Xue Mei, Zhang Yong Heng, Li Qing Wu
Abstract
Deep learning has been widely applied across many fields in recent years and has shown great promise; it is therefore the likely path toward intelligent, automatic interpretation of seismic data. However, traditional supervised deep learning trains models only on labeled data and thus leaves large amounts of unlabeled data unused. Self-supervised learning, widely used in Natural Language Processing (NLP) and computer vision, is an effective way to learn from unlabeled data. We therefore design a pretext task, modeled on Masked Autoencoders (MAE), to pre-train on unlabeled seismic data in a self-supervised manner. After pre-training, we fine-tune the model on the downstream task. Experiments show that the pretext task lets the model effectively extract information from unlabeled data, and that the pre-trained model performs better on downstream tasks.
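The abstract outlines the approach only at a high level. A minimal sketch of the kind of MAE-style pretext task it describes might look like the following, assuming PyTorch. The patch size, mask ratio, model widths, and all names (SeismicMAE, patchify) are illustrative assumptions, not the authors' implementation; positional embeddings and other MAE details are omitted for brevity.

```python
# Sketch of an MAE-style pretext task on 2-D seismic sections:
# mask random patches, encode only the visible ones, reconstruct
# the masked ones, and compute the loss on masked patches only.
import torch
import torch.nn as nn

class SeismicMAE(nn.Module):
    def __init__(self, patch=16, dim=128, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.embed = nn.Linear(patch * patch, dim)          # patch -> token
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
            num_layers=4)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(dim, patch * patch)           # token -> pixels

    def patchify(self, x):
        # x: (B, 1, H, W) seismic section -> (B, N, patch*patch)
        p = self.patch
        B = x.shape[0]
        x = x.unfold(2, p, p).unfold(3, p, p)               # (B,1,H/p,W/p,p,p)
        return x.reshape(B, -1, p * p)

    def forward(self, x):
        tokens = self.embed(self.patchify(x))               # (B, N, dim)
        B, N, D = tokens.shape
        keep = int(N * (1 - self.mask_ratio))
        # Random per-sample permutation; keep the first `keep` patches visible.
        perm = torch.rand(B, N, device=x.device).argsort(dim=1)
        visible_idx = perm[:, :keep]
        visible = torch.gather(
            tokens, 1, visible_idx.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(visible)                      # encode visible only
        # Scatter encoded tokens back; masked slots get the learned mask token.
        full = self.mask_token.expand(B, N, D).clone()
        full.scatter_(1, visible_idx.unsqueeze(-1).expand(-1, -1, D), latent)
        pred = self.head(self.decoder(full))                # (B, N, p*p)
        target = self.patchify(x)
        # Mean squared error on masked patches only, as in MAE.
        mask = torch.ones(B, N, device=x.device)
        mask.scatter_(1, visible_idx, 0.0)
        loss = ((pred - target) ** 2).mean(-1)
        return (loss * mask).sum() / mask.sum()

# Usage on a dummy batch of unlabeled 64x64 seismic sections:
model = SeismicMAE()
loss = model(torch.randn(2, 1, 64, 64))
loss.backward()
```

After pre-training along these lines, the decoder would be discarded and the encoder fine-tuned with a task-specific head on the labeled downstream data.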
Subject
General Earth and Planetary Sciences
Cited by
1 article.