Learning Controllable Adaptive Simulation for Multi-resolution Physics
Authors: Tailin Wu, Takashi Maruyama, Qingqing Zhao, Gordon Wetzstein, Jure Leskovec
ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method in a 1D benchmark of nonlinear PDEs and a challenging 2D mesh-based simulation. We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade-off computation to improve long-term prediction error: it achieves an average of 33.7% error reduction for 1D nonlinear PDEs, and outperforms Mesh Graph Nets + classical Adaptive Mesh Refinement (AMR) in 2D mesh-based simulations. |
| Researcher Affiliation | Collaboration | Stanford University, NEC Corporation |
| Pseudocode | No | The paper describes the model architecture and learning process through text and equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Project website with data and code can be found at: http://snap.stanford.edu/lamp. |
| Open Datasets | Yes | We evaluate our LAMP on two challenging datasets: (1) a 1D benchmark of nonlinear PDEs, which tests generalization of PDEs in the same family (Brandstetter et al., 2022); (2) a mesh-based paper simulation generated by the ArcSim solver (Narain et al., 2012). Project website with data and code can be found at: http://snap.stanford.edu/lamp. |
| Dataset Splits | No | The paper mentions '1000 trajectories for training data and use 50 trajectories as test data' for the 2D dataset, but it does not specify a distinct validation set or its size/split percentage. |
| Hardware Specification | Yes | We train all our models on an NVIDIA A100 80GB GPU. |
| Software Dependencies | No | The paper mentions specific activations (SiLU, ELU) and base architectures (Mesh Graph Nets) but does not provide version numbers for general software dependencies like Python, PyTorch, TensorFlow, or other relevant libraries. |
| Experiment Setup | Yes | Table 3 provides the detailed hyperparameters for the 1D and 2D experiments, including 'Batch size 128', 'Evolution model learning rate for pre-training 10^-3', and 'Optimizer Adam'; a sketch of how these settings map onto a training setup follows the table. |
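
For reference, below is a minimal sketch of how the hyperparameters reported in Table 3 (batch size 128, Adam optimizer, pre-training learning rate 10^-3) could be wired into a training loop. Since the paper does not pin down software versions, PyTorch is assumed here, and `evolution_model` and `dataset` are hypothetical placeholders, not the paper's actual architecture or data pipeline (those are available at http://snap.stanford.edu/lamp).

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for the paper's evolution model and trajectory data;
# SiLU is one of the activations the paper mentions.
evolution_model = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.SiLU(), torch.nn.Linear(128, 64)
)
dataset = TensorDataset(torch.randn(1024, 64), torch.randn(1024, 64))

# Settings reported in Table 3: batch size 128, Adam, pre-training LR 1e-3.
loader = DataLoader(dataset, batch_size=128, shuffle=True)
optimizer = torch.optim.Adam(evolution_model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# One pre-training epoch over the placeholder data.
for inputs, targets in loader:
    optimizer.zero_grad()
    loss = loss_fn(evolution_model(inputs), targets)
    loss.backward()
    optimizer.step()
```

This is only an illustration of the documented settings; consult the released code for the authors' actual model, data handling, and full hyperparameter schedule.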