Learning Mesh-Based Simulation with Graph Networks
Authors: Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, Peter Battaglia
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluated our method on a variety of systems with different underlying PDEs, including cloth, structural mechanics, incompressible and compressible fluids (Figure 2). Training and test data was produced by a different simulator for each domain. The simulation meshes range from regular to highly irregular: the edge lengths of dataset AIRFOIL range between 2·10⁻⁴ m and 3.5 m, and we also simulate meshes which dynamically change resolution over the course of a trajectory. Full details on the datasets can be found in Section A.1. Our main findings are that MeshGraphNets are able to produce high-quality rollouts on all domains, outperforming particle- and grid-based baselines, while being significantly faster than the ground truth simulator, and generalizing to much larger and more complex settings at test time. |
| Researcher Affiliation | Industry | Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, Peter W. Battaglia; DeepMind, London, UK; {tpfaff,meirefortunato,alvarosg,peterbattaglia}@google.com |
| Pseudocode | No | The paper describes its methods in prose and through diagrams (Figure 1), but it does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks or structured algorithmic steps. (An illustrative sketch of one graph-network message-passing block, based only on the prose description, appears after this table.) |
| Open Source Code | No | The paper includes links to videos of experiments but does not provide any links or explicit statements about the availability of the source code for the methodology described in the paper. |
| Open Datasets | No | The paper states that the training and test data were "produced by a different simulator for each domain" (Section 4) using simulators like Arc Sim [27], SU2 [13], and COMSOL [11]. While the simulators are cited, there is no explicit statement or link indicating that the *generated datasets themselves* are publicly available. |
| Dataset Splits | Yes | Each dataset consists of 1000 training, 100 validation and 100 test trajectories, each containing 250-600 time steps. |
| Hardware Specification | Yes | Models are trained on a single V100 GPU with the Adam optimizer for 10M training steps... In the table below, we show a detailed breakdown of per-step timings of our model run on CPU (8-core workstation) or a single V100 GPU. On our datasets, inference uses between 1 and 2.5 GB of memory, including model variables and system overhead. |
| Software Dependencies | Yes | We used Arc Sim [27] for simulating the cloth datasets, SU2 [13] for compressible flows, and COMSOL [11] for incompressible flow and hyperelastic simulations. [11] Comsol. COMSOL Multiphysics® v. 5.4. http://comsol.com, 2020. |
| Experiment Setup | Yes | Models are trained on a single V100 GPU with the Adam optimizer for 10M training steps, using an exponential learning rate decay from 10⁻⁴ to 10⁻⁶ over 5M steps. The MLPs... layer and output size of 128... Increasing the number of graph net blocks (message passing steps) generally improves performance, but it incurs a higher computational cost. We found that a value of 15 provides a good efficiency/accuracy trade-off for all the systems considered. Second, the model performs best given the shortest possible history (h=1 to estimate ẋ in cloth experiments, h=0 otherwise), with any extra history leading to overfitting. Dataset: FLAGSIMPLE, batch size: 1, noise scale: pos: 1e-3... (A hedged sketch collecting these hyperparameters into a configuration appears after this table.) |
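
Since the paper provides no pseudocode, the following is a minimal, framework-free sketch of what one processor (message-passing) block of an encode-process-decode graph network could look like, based only on the prose quoted above. The function names (`edge_mlp`, `node_mlp`), the residual connections, and the sum aggregation are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code) of one graph-network processor block,
# i.e. a single message-passing step, as described in prose in the paper.
# Assumptions: residual updates, sum aggregation of incoming edge messages,
# and equal latent sizes for nodes and edges (e.g. 128, as quoted above).
import numpy as np

def message_passing_block(node_feats, edge_feats, senders, receivers,
                          edge_mlp, node_mlp):
    """node_feats: (N, D), edge_feats: (E, D), senders/receivers: (E,) ints.
    edge_mlp / node_mlp are callables mapping concatenated features back to D."""
    # Edge update: each edge sees its own feature plus both endpoint features.
    edge_inputs = np.concatenate(
        [edge_feats, node_feats[senders], node_feats[receivers]], axis=-1)
    new_edges = edge_feats + edge_mlp(edge_inputs)          # residual update

    # Node update: sum the updated incoming edge messages per receiver node.
    aggregated = np.zeros_like(node_feats)
    np.add.at(aggregated, receivers, new_edges)
    node_inputs = np.concatenate([node_feats, aggregated], axis=-1)
    new_nodes = node_feats + node_mlp(node_inputs)          # residual update

    return new_nodes, new_edges
```

Stacking 15 such blocks would correspond to the 15 message passing steps reported in the Experiment Setup row.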
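The hyperparameters quoted in the Experiment Setup and Hardware Specification rows can likewise be collected into a small configuration sketch. Only the numbers come from the paper; the exact exponential-decay formula is an assumption, since the paper states only the endpoints (10⁻⁴ to 10⁻⁶) and the horizon (5M steps).

```python
# Hedged configuration sketch assembled from the values quoted above.
# The schedule parameterization below is an assumption, not the paper's code.
TRAINING_CONFIG = {
    "optimizer": "Adam",
    "training_steps": 10_000_000,
    "lr_start": 1e-4,
    "lr_end": 1e-6,
    "lr_decay_steps": 5_000_000,
    "mlp_layer_and_output_size": 128,
    "message_passing_steps": 15,      # quoted efficiency/accuracy trade-off
    "history_cloth": 1,               # h=1 for cloth experiments
    "history_default": 0,             # h=0 otherwise
    "batch_size_flagsimple": 1,
    "noise_scale_flagsimple_pos": 1e-3,
}

def learning_rate(step: int, cfg: dict = TRAINING_CONFIG) -> float:
    """Exponential decay from lr_start to lr_end over lr_decay_steps,
    held constant afterwards (one plausible reading of the quoted schedule)."""
    frac = min(step / cfg["lr_decay_steps"], 1.0)
    return cfg["lr_start"] * (cfg["lr_end"] / cfg["lr_start"]) ** frac
```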