An Intriguing Property of Geophysics Inversion
Authors: Yinan Feng, Yinpeng Chen, Shihang Feng, Peng Jin, Zicheng Liu, Youzuo Lin
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that this interesting property holds for two geophysics inversion problems over four different datasets. Compared to the much deeper InversionNet (Wu & Lin, 2019), our method achieves comparable accuracy but consumes significantly fewer parameters. |
| Researcher Affiliation | Collaboration | (1) Earth and Environmental Sciences Division, Los Alamos National Laboratory, USA; (2) Microsoft Research, USA; (3) College of Information Sciences and Technology, The Pennsylvania State University, USA. Correspondence to: Youzuo Lin <ylin@lanl.gov>. |
| Pseudocode | No | The paper describes the methods in text and uses a schematic illustration (Figure 2) but does not provide formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | Yes | Kimberlina-Leakage: The geophysical properties were developed under DOE's National Risk Assessment Program (NRAP). It contains 991 CO2 leakage scenarios... (Jordan & Wagoner, 2017) Marmousi: We apply the generating method in Jin et al. (2022), which follows Feng et al. (2021) and adopts the Marmousi velocity map as the style image... Salt: The dataset contains 140 velocity maps (Yang & Ma, 2019). Kimberlina-Reservoir: The geophysical properties were also developed under DOE's NRAP... (Alumbaugh et al., 2021). |
| Dataset Splits | Yes | Marmousi: This dataset contains 30K paired samples of seismic data and velocity maps; 24K samples form the training set, 3K the validation set, and the remaining 3K the test set. Salt: The dataset contains 140 velocity maps (Yang & Ma, 2019). We downsample it to 40 × 60 with a grid size of 10 m, and a 120/10/10 split is applied. (A minimal split sketch appears after the table.) |
| Hardware Specification | Yes | We implement our models in PyTorch and train them on 1 NVIDIA Tesla V100 GPU. When training on the Marmousi dataset using 1 GPU (NVIDIA Quadro RTX 8000), our model is 9 times faster than InversionNet/VelocityGAN (1 hour vs. 9 hours). We also tested inference runtime with batch size 1 on a single thread of an Intel(R) Xeon(R) CPU Gold 6248 v3 (2.5 GHz). (A timing sketch follows the table.) |
| Software Dependencies | No | The paper mentions implementing models in PyTorch but does not provide specific version numbers for PyTorch or any other software dependency. |
| Experiment Setup | Yes | The input seismic data and EM data are normalized to the range [-1, 1]. Ridge regression is applied to solve the linear layer in the encoder, with the regularization parameter α = 1. We employ the AdamW (Loshchilov & Hutter, 2018) optimizer with momentum parameters β₁ = 0.5, β₂ = 0.999 and a weight decay of 1 × 10⁻⁴ to update the decoder parameters of the network. The initial learning rate is set to 1 × 10⁻³, and we decay the learning rate with cosine annealing (Loshchilov & Hutter, 2016), where T₀ = 5, T_mult = 2 and the minimum learning rate is set to 1 × 10⁻³. The size of every mini-batch is set to 128. (A training-setup sketch follows the table.) |
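
The reported splits are fully determined by the partition sizes quoted above. A minimal sketch, assuming a simple shuffled index partition with a fixed seed (the paper does not specify how samples were assigned to splits):

```python
# Minimal split sketch; the fixed-seed shuffle is an assumption, since the
# paper only reports partition sizes, not the assignment procedure.
import numpy as np

def split_indices(n, n_train, n_val, seed=0):
    """Shuffle indices 0..n-1 and cut them into train/val/test arrays."""
    idx = np.random.default_rng(seed).permutation(n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Marmousi: 30K samples -> 24K train / 3K validation / 3K test
train_m, val_m, test_m = split_indices(30_000, 24_000, 3_000)

# Salt: 140 velocity maps -> 120 / 10 / 10
train_s, val_s, test_s = split_indices(140, 120, 10)
```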
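
The CPU inference numbers (batch size 1, single thread) imply a timing protocol like the one sketched below. The tiny convolutional model and the input shape are placeholders, not the paper's architecture:

```python
# Hypothetical timing harness for batch-size-1, single-thread CPU inference.
# The Conv2d stand-in and the 5 x 1000 x 70 input shape are assumptions;
# the paper's actual network and data dimensions may differ.
import time
import torch

torch.set_num_threads(1)                      # single CPU thread, as reported
model = torch.nn.Conv2d(5, 8, kernel_size=3).eval()
x = torch.randn(1, 5, 1000, 70)               # batch size 1

with torch.no_grad():
    for _ in range(10):                       # warm-up iterations
        model(x)
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    mean_s = (time.perf_counter() - start) / 100

print(f"mean inference time: {mean_s * 1e3:.2f} ms")
```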
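
The quoted hyperparameters map directly onto standard PyTorch and scikit-learn calls. A minimal sketch, assuming min-max normalization and a placeholder decoder module (the paper spells out neither):

```python
# Sketch of the reported optimization setup. The normalization formula and
# the stand-in decoder are assumptions; the optimizer and scheduler settings
# follow the hyperparameters quoted in the table.
import torch
from sklearn.linear_model import Ridge

def normalize(x):
    """Min-max scale data to [-1, 1] (the exact scheme is not specified)."""
    return 2 * (x - x.min()) / (x.max() - x.min()) - 1

decoder = torch.nn.Linear(64, 64)             # placeholder for the decoder

optimizer = torch.optim.AdamW(
    decoder.parameters(),
    lr=1e-3,                                  # initial learning rate
    betas=(0.5, 0.999),                       # momentum parameters
    weight_decay=1e-4,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=5, T_mult=2, eta_min=1e-3  # minimum learning rate as quoted
)

# The encoder's linear layer is solved in closed form with Ridge regression.
ridge = Ridge(alpha=1.0)

batch_size = 128                              # mini-batch size from the paper
```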