Amortized Finite Element Analysis for Fast PDE-Constrained Optimization

Authors: Tianju Xue, Alex Beatson, Sigrid Adriaenssens, Ryan Adams

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical experiments show that our method outperforms the traditional adjoint method on a per-iteration basis."
Researcher Affiliation | Academia | "1 Department of Civil and Environmental Engineering, Princeton University, Princeton, NJ, USA; 2 Department of Computer Science, Princeton University, Princeton, NJ, USA."
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "We share and publish our code at https://github.com/tianjuxue/AmorFEA."
Open Datasets | No | "To construct the training and testing data from this distribution, 30,000 source terms were generated. Compared with supervised data generated by expensive FEA simulations, our data are almost free to obtain."
Dataset Splits | No | The paper mentions a "90/10 train-test split" but does not specify a separate validation split.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for the experiments are mentioned in the paper.
Software Dependencies | No | "FEA simulations are carried out using an open source Python package FEniCS (Logg et al., 2012). Neural network training is performed in PyTorch (Paszke et al., 2019)." The software is named, but no version numbers (e.g., FEniCS 2019.1) are given, only the publication years of the cited references.
Experiment Setup | Yes | "We use a MLP with scaled exponential linear units (SELUs) for the activation functions (Klambauer et al., 2017). We perform a 90/10 train-test split for our data. The gradient descent step size is set to be consistent within each case, but different across the four cases: (10^-2, 10^-2, 2×10^-3, 2×10^-3)." A hedged sketch of this setup appears below the table.
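For orientation, here is a minimal PyTorch sketch of the experiment setup quoted in the last row. Only the SELU activations, the 90/10 train-test split, plain gradient descent, and the quoted step sizes come from the paper; the layer widths, data dimensions, batch size, and the stand-in loss are assumptions for illustration, not the authors' actual configuration (see their repository for that).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder data standing in for the 30,000 generated source terms;
# the dimension 31 is an assumption, not taken from the paper.
n_samples, dim = 30_000, 31
sources = torch.randn(n_samples, dim)

# 90/10 train-test split, as reported in the paper.
n_train = int(0.9 * n_samples)
train, test = sources[:n_train], sources[n_train:]

# MLP with SELU activations (Klambauer et al., 2017), matching the paper's
# stated architecture choice; the hidden widths here are illustrative.
model = nn.Sequential(
    nn.Linear(dim, 128),
    nn.SELU(),
    nn.Linear(128, 128),
    nn.SELU(),
    nn.Linear(128, dim),
)

# Plain gradient descent with one of the per-case step sizes quoted above
# (10^-2, 10^-2, 2x10^-3, 2x10^-3); 1e-2 is used here as an example.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

def loss_fn(u):
    # Placeholder unsupervised objective; the paper's actual amortization
    # loss (an energy functional from the PDE) is not reproduced here.
    return (u ** 2).mean()

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(train[:64]))
    loss.backward()
    optimizer.step()
```

The point of the sketch is the shape of the setup: an unsupervised objective means training data are "almost free to obtain" compared with labels from expensive FEA solves, which is the claim made in the Open Datasets row.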