Learning Composable Energy Surrogates for PDE Order Reduction

Authors: Alex Beatson, Jordan Ash, Geoffrey Roeder, Tianju Xue, Ryan P. Adams

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the ability of Composable Energy Surrogates (CES) to efficiently produce accurate solutions." (Section 8) "Figure 3 shows our evaluation. Composed energy surrogates are more efficient than high-fidelity FEA simulations yet more accurate than low-fidelity FEA, occupying a new point on the Pareto frontier." (Section 8) "Data collection has two phases. First, we collect training and validation datasets..." (Section 6)
Researcher Affiliation | Collaboration | Alex Beatson (Princeton University, abeatson@princeton.edu); Jordan T. Ash (Microsoft Research NYC, ash.jordan@microsoft.com); Geoffrey Roeder (Princeton University, roeder@princeton.edu); Tianju Xue (Princeton University, txue@princeton.edu); Ryan P. Adams (Princeton University, rpa@princeton.edu)
Pseudocode | No | The paper describes the model structure and training process in prose and mathematical equations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper lists and cites the software packages used (dolfin, FEniCS, dolfin-adjoint, PyTorch, Ray), but it does not include a statement or link indicating that the authors' own implementation of Composable Energy Surrogates (CES) is open source or publicly available.
Open Datasets | No | The paper states, "We sample 55000 training examples and 5000 validation examples altogether." (Section 6) However, it does not provide concrete access information (a link, DOI, repository, or citation to an established public dataset) for the dataset it generated.
Dataset Splits | Yes | "We sample 55000 training examples and 5000 validation examples altogether." (Section 6) The paper also reports error metrics in its Section 8 evaluation, implying a held-out test phase.
Hardware Specification | Yes | "The initial dataset is collected using 80 M4.xlarge CPU spot workers. While training the surrogate, we use a GPU P3.large driver node to train the model, and 80 M4.xlarge CPU spot worker nodes performing DAGGER in parallel." (Section 7) "Measurements are taken on an AWS M4.xlarge EC2 CPU instance." (Section 8) (A sketch of the parallel DAGGER dispatch pattern appears below the table.)
Software Dependencies | No | The paper mentions software such as dolfin, FEniCS, dolfin-adjoint, PyTorch, and Ray, with citations to their respective papers. However, it does not provide version numbers for these packages.
Experiment Setup | Yes | "We use PyTorch's L-BFGS routine to minimize the composed surrogate energy, with step size 0.25 and default criteria for checking convergence." (Section 8) "We attempt to solve each finite element model with FEniCS' Newton method with [1, 2, 5, 10, 20] load steps and relaxation parameters [0.9, 0.7, 0.4, 0.1, 0.05]." (Section 8) "We find that combining load stepping with a relaxed Newton's method is more efficient than using either alone. Except where specified, we linearly anneal from rest to u over 10 load steps and use a relaxation parameter λ = 0.1." (Section 6) "For all experiments we use N = 10 control points along each edge, resulting in u ∈ R^72." (Section 5) (Minimal sketches of the L-BFGS loop and the load-stepping Newton solve appear below the table.)
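
To ground the Experiment Setup row, here is a minimal sketch of the reported optimization loop: PyTorch's L-BFGS with step size 0.25 minimizing a scalar energy over a boundary-displacement vector u ∈ R^72. The MLP surrogate below is a placeholder, not the paper's CES architecture, and the composition over multiple components is collapsed to a single energy term.

```python
import torch

# Placeholder surrogate: maps a boundary displacement u in R^72
# (10 control points per edge) to a scalar energy. The paper's actual
# CES network and its composition over components are not reproduced.
surrogate = torch.nn.Sequential(
    torch.nn.Linear(72, 128),
    torch.nn.Softplus(),
    torch.nn.Linear(128, 1),
)

# Boundary displacement, initialized at rest.
u = torch.zeros(72, requires_grad=True)

# L-BFGS with step size 0.25, as reported in Section 8; all other
# options are left at PyTorch defaults ("default criteria for
# checking convergence").
optimizer = torch.optim.LBFGS([u], lr=0.25)

def closure():
    optimizer.zero_grad()
    energy = surrogate(u).sum()  # a composed model would sum component energies
    energy.backward()
    return energy

for _ in range(20):  # outer iterations; convergence checking elided
    optimizer.step(closure)

print(float(surrogate(u)))
```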
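
The load-stepping scheme quoted from Section 6 can be illustrated with a small dolfin/FEniCS sketch. The unit-square mesh, compressible neo-Hookean energy, and pull displacement below are generic stand-ins (the paper solves cellular metamaterial components), but the structure, linearly annealing the boundary condition from rest over 10 load steps and solving each step with a relaxed Newton method (relaxation parameter 0.1), follows the quoted setup.

```python
from dolfin import *

# Generic hyperelastic block; geometry and material stand in for the
# paper's cellular metamaterial components.
mesh = UnitSquareMesh(16, 16)
V = VectorFunctionSpace(mesh, "CG", 1)
u = Function(V)

# Clamp the left edge; pull the right edge by an annealed displacement.
left = CompiledSubDomain("near(x[0], 0.0)")
right = CompiledSubDomain("near(x[0], 1.0)")
pull = Expression(("s*0.1", "0.0"), s=0.0, degree=0)  # s is the load factor
bcs = [DirichletBC(V, Constant((0.0, 0.0)), left),
       DirichletBC(V, pull, right)]

# Compressible neo-Hookean strain energy (standard FEniCS demo form,
# not the paper's constitutive model).
I = Identity(2)
F = I + grad(u)
J = det(F)
mu, lmbda = 1.0, 10.0
psi = (mu / 2) * (tr(F.T * F) - 2) - mu * ln(J) + (lmbda / 2) * ln(J) ** 2
residual = derivative(psi * dx, u)

# Linearly anneal from rest over 10 load steps, each solved with a
# relaxed Newton method (relaxation parameter 0.1), as in Section 6.
for step in range(1, 11):
    pull.s = step / 10.0
    solve(residual == 0, u, bcs,
          solver_parameters={"newton_solver": {"relaxation_parameter": 0.1,
                                               "maximum_iterations": 200}})
```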
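
Finally, the Hardware Specification row describes 80 CPU spot workers performing DAGGER in parallel via Ray. The sketch below shows only that dispatch pattern; `collect_dagger_example` is a hypothetical stub, where the real workers would run FEniCS solves with the current surrogate in the loop.

```python
import ray

ray.init()  # on AWS this would attach to the cluster of spot workers

@ray.remote
def collect_dagger_example(seed: int) -> dict:
    # Hypothetical stub for one DAGGER rollout: solve a component
    # problem with the current surrogate in the loop and return a
    # labeled example. Only the dispatch pattern is real.
    return {"seed": seed}

# One task per worker; the paper reports 80 M4.xlarge CPU spot workers.
futures = [collect_dagger_example.remote(s) for s in range(80)]
examples = ray.get(futures)
print(len(examples))
```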