Auto-Linear Phenomenon in Subsurface Imaging

Authors: Yinan Feng, Yinpeng Chen, Peng Jin, Shihang Feng, Youzuo Lin

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5. Experiments: We evaluate our approach on OpenFWI (Deng et al., 2022), the first and only large-scale collection of openly accessible multi-structural seismic FWI datasets with benchmarks. We compare our method with the state-of-the-art works, including InversionNet (Wu & Lin, 2019), i.e., the method that jointly trains the encoder and decoder, and InvLINT (Feng et al., 2022), i.e., the method that separates the encoder and decoder. We also evaluate Auto-Linear’s generalizability for other imaging and PDE tasks. In particular, we test it on the electromagnetic (EM) inversion task controlled by Maxwell’s equations. (A minimal code sketch of the separated encoder–decoder setup appears after the table.)
Researcher Affiliation | Collaboration | 1 Department of Computer Science, The University of North Carolina at Chapel Hill, USA; 2 Google Research, USA; 3 College of Information Sciences and Technology, The Pennsylvania State University, USA; 4 Earth and Environmental Sciences Division, Los Alamos National Laboratory, USA; 5 School of Data Science and Society, The University of North Carolina at Chapel Hill, USA.
Pseudocode | No | The paper describes its methodology using mathematical formulations and descriptive text, but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing its source code, nor a link to a code repository for the described methodology.
Open Datasets | Yes | We verify our method on OpenFWI (Deng et al., 2022), the first open-source collection of large-scale, multi-structural benchmark datasets for data-driven seismic FWI.
Dataset Splits | Yes | For both datasets, we allocated 80% of the data for training the linear converter, with the remaining 20% used for validation. (A split sketch appears after the table.)
Hardware Specification | Yes | We implement our models in PyTorch and train them on 1 NVIDIA Tesla V100 GPU.
Software Dependencies | No | The paper mentions using PyTorch for implementation and the AdamW optimizer, but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | We employ AdamW (...) optimizer with momentum parameters β1 = 0.9, β2 = 0.999 and a weight decay of 0.05 for both self-supervision and supervision steps. (...) we change the batch size to 512 and remove the pixel normalization. (...) the initial learning rate is set to be 1 × 10−3, and decayed with a cosine annealing (...). The batch size is set to 256. To make a fair comparison with the previous work, we use l1 loss to train the linear layer. (...) The rank of the linear converter is set to 128. The mask ratio for training MAE is set to 0.75. (A training-loop sketch appears after the table.)
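The Research Type row contrasts a jointly trained encoder–decoder (InversionNet) with a setup in which encoder and decoder are trained separately and connected by a linear map. Below is a minimal PyTorch sketch of that separated setup as suggested by the quoted description; all class and function names are hypothetical, and the latent shapes and the choice of loss target are assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class LinearConverter(nn.Module):
    """Hypothetical low-rank linear map between the two latent spaces."""
    def __init__(self, enc_dim: int, dec_dim: int, rank: int = 128):
        super().__init__()
        self.down = nn.Linear(enc_dim, rank, bias=False)   # enc_dim -> rank
        self.up = nn.Linear(rank, dec_dim, bias=False)     # rank -> dec_dim

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(z))

def freeze(module: nn.Module) -> nn.Module:
    """Freeze a pretrained network so only the converter receives gradients."""
    for p in module.parameters():
        p.requires_grad_(False)
    return module.eval()

def predict_velocity(seismic, encoder, converter, decoder):
    """Separated pipeline: frozen encoder -> trainable linear converter -> frozen decoder."""
    with torch.no_grad():
        z_seismic = encoder(seismic)      # encoder pretrained self-supervised on seismic data
    z_velocity = converter(z_seismic)     # the only component trained with paired labels
    # The decoder's weights are frozen, but the graph is kept so a loss on the
    # decoded velocity map can back-propagate into the converter.
    return decoder(z_velocity)
```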
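The Dataset Splits row reports an 80/20 train/validation split for the linear-converter stage. A minimal way to reproduce such a split in PyTorch is sketched below; the tensor shapes, dataset size, and random seed are placeholders, not values from the paper.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder tensors standing in for paired (seismic, velocity) samples;
# the shapes and sample count below are illustrative only.
seismic = torch.randn(100, 5, 256, 70)
velocity = torch.randn(100, 1, 70, 70)
dataset = TensorDataset(seismic, velocity)

n_train = int(0.8 * len(dataset))              # 80% for training the linear converter
train_set, val_set = random_split(
    dataset,
    [n_train, len(dataset) - n_train],          # remaining 20% for validation
    generator=torch.Generator().manual_seed(0), # fixed seed: an assumption
)
```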
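The Experiment Setup row lists the optimizer and schedule hyperparameters. They map onto a standard PyTorch training loop roughly as follows; this is a hedged sketch of the supervised step that fits the linear converter only, with the epoch count, data loader, and the decoded map as loss target all assumptions. The 0.75 mask ratio belongs to the MAE-style self-supervised pretraining of the encoder and decoder, which this sketch does not cover.

```python
import torch
import torch.nn as nn

def train_linear_converter(converter, encoder, decoder, train_loader, epochs: int):
    """Supervised step using the quoted hyperparameters; encoder/decoder assumed frozen."""
    optimizer = torch.optim.AdamW(
        converter.parameters(),
        lr=1e-3,                # initial learning rate 1e-3
        betas=(0.9, 0.999),     # momentum parameters beta1, beta2
        weight_decay=0.05,      # weight decay 0.05
    )
    # Cosine annealing of the learning rate over the run.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    loss_fn = nn.L1Loss()       # l1 loss for training the linear layer

    for _ in range(epochs):
        for seismic, velocity in train_loader:   # batch size 256 is set in the loader
            optimizer.zero_grad()
            with torch.no_grad():
                z = encoder(seismic)             # frozen self-supervised encoder
            pred = decoder(converter(z))         # frozen decoder; converter rank is 128
            loss = loss_fn(pred, velocity)
            loss.backward()                      # gradients reach only the converter
            optimizer.step()
        scheduler.step()
```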