Random Grid Neural Processes for Parametric Partial Differential Equations

Authors: Arnaud Vadeboncoeur, Ieva Kazlauskaite, Yanni Papandreou, Fehmi Cirak, Mark Girolami, Omer Deniz Akyildiz

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The proposed method is tested on a nonlinear Poisson problem, the Burgers equation, and the Navier-Stokes equations, and we provide extensive numerical comparisons. We demonstrate significant computational advantages over current physics-informed neural learning methods for parametric PDEs while improving the predictive capabilities and flexibility of these models."
Researcher Affiliation | Academia | (1) Department of Engineering, University of Cambridge, Trumpington St, Cambridge CB2 1PZ; (2) Department of Mathematics, Imperial College London, Exhibition Rd, South Kensington, London SW7 2AZ, United Kingdom; (3) The Alan Turing Institute, British Library, 96 Euston Rd, London NW1 2DB, United Kingdom.
Pseudocode | Yes | "Algorithm 1: Pseudocode for RGNP"
Open Source Code | No | The paper contains no explicit statement about releasing source code for the described method, and no link to a code repository.
Open Datasets | No | "Methods other than ours rely on creating a dataset of 1k input pairs of z, w variables for the Poisson problem, and 10k samples for the Burgers example. ... We use 1k noisy sample solutions measured at 60 locations with a noise standard deviation σ_n = 0.05." No concrete access information for a publicly available dataset is given.
Dataset Splits | No | The paper mentions "1000 independent samples of z, w drawn from their priors" for testing and "100 samples in the Navier-Stokes examples", but does not explicitly describe validation splits in terms of percentages or counts.
Hardware Specification | Yes | "All experiments were run on an AMD Ryzen 9 5950X CPU (16 cores, 32 virtual) with 128 GB memory and an Nvidia RTX 3090 (24 GB VRAM) GPU."
Software Dependencies | No | "All experiments were run using TensorFlow. All experiments were conducted with swish activation functions (Ramachandran et al., 2017). The chosen testing metrics... solved using FEniCS (Logg et al., 2012)." No specific version numbers are provided for TensorFlow, FEniCS, or other libraries.
Experiment Setup | Yes | Architecture details (nonlinear Poisson): 7 hidden layers, 300 neurons per hidden layer, swish activation, GICNet channel dimension 20, GICNet points/dim lattice 20. "We train the Poisson problem for 20k gradient updates and we train the Burgers setup for 80k gradient update steps." All learning uses the Adam optimizer (Kingma & Ba, 2015) with a decaying learning rate.
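The reported network configuration can be sketched in TensorFlow (the framework named in the Software Dependencies row). Only the width (300 units), depth (7 hidden layers), swish activation, and Adam with a decaying learning rate come from the table; the input/output dimensions and the exponential-decay schedule below are illustrative assumptions, not values from the paper.

```python
import tensorflow as tf

def build_mlp(in_dim=2, out_dim=1, width=300, depth=7):
    """Fully connected network matching the reported architecture:
    `depth` hidden layers of `width` units with swish activations.
    in_dim/out_dim are placeholder values, not taken from the paper."""
    inputs = tf.keras.Input(shape=(in_dim,))
    x = inputs
    for _ in range(depth):
        x = tf.keras.layers.Dense(width, activation="swish")(x)
    outputs = tf.keras.layers.Dense(out_dim)(x)  # linear output layer
    return tf.keras.Model(inputs, outputs)

# "Adam with a decaying learning rate" -- the exponential schedule and its
# constants are chosen here purely for illustration.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.95)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)

model = build_mlp()
model.compile(optimizer=optimizer, loss="mse")
```

With these placeholder dimensions the model has 543,001 trainable parameters; almost all of them sit in the six 300-to-300 hidden-layer weight matrices.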