Variational Autoencoding Neural Operators

Authors: Jacob H. Seidman, Georgios Kissas, George J. Pappas, Paris Perdikaris

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test VANO with different model set-ups and architecture choices for a variety of benchmarks. We start from a simple Gaussian random field where we can analytically track what the model learns and progressively transition to more challenging benchmarks including modeling phase separation in Cahn-Hilliard systems and real world satellite data for measuring Earth surface deformation.
Researcher Affiliation | Academia | (1) Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, USA; (2) Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, Philadelphia, USA.
Pseudocode | No | The paper does not include any explicitly labeled pseudocode or algorithm blocks. Methods are described in prose.
Open Source Code | No | The paper does not contain any explicit statement that the authors are releasing the source code for their own method (VANO). It only provides a link to the official repository of GANO, a competing baseline.
Open Datasets | Yes | We start from a simple Gaussian random field where we can analytically track what the model learns and progressively transition to more challenging benchmarks including modeling phase separation in Cahn-Hilliard systems and real world satellite data for measuring Earth surface deformation.
Dataset Splits | No | The paper mentions 'Ntrain = 2048 functions' for training and 'Ntest = 2048 functions' for testing in the Gaussian Random Field and 2D Gaussian Densities experiments, but it does not specify a separate validation dataset split with percentages or sample counts.
Hardware Specification | Yes | We present the wall clock time in seconds that is needed to train each model on a single NVIDIA RTX A6000 GPU.
Software Dependencies | No | The paper mentions software like JAX, Matplotlib, PyTorch, and NumPy along with citations to their original papers, but it does not specify exact version numbers for these software dependencies, which are required for full reproducibility.
Experiment Setup | Yes | We train the model using the Adam optimizer (Kingma & Ba, 2014) with random weight factorization (Wang et al., 2022b) for 40,000 training iterations with a batch size of 32 and a starting learning rate of 10^-3 with exponential decay of 0.9 every 1,000 steps.
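
The training configuration quoted in the Experiment Setup row maps onto a standard JAX/optax training loop. The sketch below is a hypothetical reconstruction, not the authors' released code (none exists, per the Open Source Code row): the iteration count, batch size, and learning rate schedule are taken from the paper, while the parameters, loss, and data iterator are placeholders, and the random weight factorization of Wang et al. (2022b) is omitted.

```python
# Hypothetical sketch of the reported training setup in JAX + optax.
# Hyperparameters (40,000 iterations, batch size 32, lr 1e-3 decayed by
# 0.9 every 1,000 steps) come from the paper; everything else is a placeholder.
import jax
import jax.numpy as jnp
import optax

# Exponential decay: multiply the learning rate by 0.9 every 1,000 steps.
# staircase=True assumes discrete decay, as "every 1,000 steps" suggests.
schedule = optax.exponential_decay(
    init_value=1e-3,
    transition_steps=1_000,
    decay_rate=0.9,
    staircase=True,
)
optimizer = optax.adam(learning_rate=schedule)

params = {"w": jnp.zeros((8, 8))}  # placeholder for the VANO parameters
opt_state = optimizer.init(params)

def loss_fn(params, batch):
    # Placeholder objective; VANO actually optimizes a variational
    # (ELBO-style) reconstruction + KL loss over function-valued data.
    del batch
    return jnp.mean(params["w"] ** 2)

@jax.jit
def train_step(params, opt_state, batch):
    loss, grads = jax.value_and_grad(loss_fn)(params, batch)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss

# Training loop: 40,000 iterations with batches of 32 functions.
# for step in range(40_000):
#     batch = next(data_iterator)  # hypothetical iterator yielding 32 samples
#     params, opt_state, loss = train_step(params, opt_state, batch)
```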
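
For the Gaussian random field benchmark mentioned in the Research Type and Open Datasets rows, a minimal sampler illustrates the kind of function-valued training data involved. This is a generic sketch under assumptions: the squared-exponential kernel, length scale, and 1D domain are choices of this illustration, since the paper's exact covariance is not quoted in this report.

```python
# Hypothetical GRF sampler: draws function samples u(x) on a 1D grid from
# a zero-mean Gaussian process with an assumed squared-exponential kernel.
import jax
import jax.numpy as jnp

def grf_samples(key, n_points=128, n_samples=2048, length_scale=0.1):
    x = jnp.linspace(0.0, 1.0, n_points)
    # Squared-exponential covariance matrix, with jitter for numerical stability.
    cov = jnp.exp(-0.5 * ((x[:, None] - x[None, :]) / length_scale) ** 2)
    chol = jnp.linalg.cholesky(cov + 1e-6 * jnp.eye(n_points))
    z = jax.random.normal(key, (n_samples, n_points))
    return x, z @ chol.T  # each row is one sampled function u(x)

# e.g. a training set of Ntrain = 2048 functions, matching the reported count
x, u_train = grf_samples(jax.random.PRNGKey(0))
```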