Scalable Graph Networks for Particle Simulations
Authors: Karolis Martinkus, Aurelien Lucchi, Nathanaël Perraudin
AAAI 2021, pp. 8912-8920
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate how the phase space position accuracy and energy conservation depend on the number of simulated particles. Our approach retains high accuracy and efficiency even on large-scale gravitational N-body simulations which are impossible to run on a single machine if a fully-connected graph is used. Similar results are also observed when simulating Coulomb interactions. |
| Researcher Affiliation | Academia | Karolis Martinkus,¹ Aurelien Lucchi,¹ Nathanaël Perraudin² (¹ETH Zürich, ²Swiss Data Science Center). martinkus@ethz.ch, aurelien.lucchi@inf.ethz.ch, nathanael.perraudin@sdsc.ethz.ch |
| Pseudocode | No | The paper describes the process and transformations but does not provide any formal pseudocode or algorithm blocks (e.g., a figure or section labeled 'Algorithm' or 'Pseudocode'). |
| Open Source Code | No | The paper does not include any explicit statement about making the source code available for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper states: 'We built our own N-body simulator.' and 'We simulate 1000 training, 200 validation and 200 test trajectories.' It describes how the data was generated and initialized but does not provide concrete access information (link, DOI, or formal citation for public availability) for this custom dataset. A hedged simulator sketch follows the table. |
| Dataset Splits | Yes | We simulate 1000 training, 200 validation and 200 test trajectories. |
| Hardware Specification | Yes | The experiments were performed on a machine with an Intel Xeon E5-2690 v3 CPU (2.60GHz, 12 cores), 64GB RAM and NVIDIA Tesla P100 GPU (16GB RAM). |
| Software Dependencies | No | The paper mentions using 'ADAM (Kingma and Ba 2014)' for optimization, but it does not specify any software or library names with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Models are trained for 500 thousand steps using a batch size of 100 unless stated otherwise. We exponentially decay the learning rate every 200 thousand steps by 0.1. The initial learning rate for all of the models was set to 0.0003. The models are optimised using ADAM (Kingma and Ba 2014) and the mean square error (MSE), computed between the predicted and true phase space coordinates after one time step. The base time step is set to Δt = 0.01. Gravitational and Coulomb constants are respectively set to G = 2 and k = 2. A hedged training-loop sketch follows the table. |
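The custom simulator flagged in the 'Open Datasets' row is not released, so the following is a minimal NumPy sketch of one plausible way to generate such trajectories: softened pairwise Newtonian gravity with the paper's stated G = 2 and Δt = 0.01, integrated with leapfrog. The integrator choice, the softening length `eps`, and the initial conditions are our assumptions; the paper does not specify them.

```python
import numpy as np

G = 2.0    # gravitational constant stated in the paper
DT = 0.01  # base time step stated in the paper

def pairwise_accelerations(pos, mass, eps=1e-2):
    """Newtonian gravitational acceleration on every particle.

    pos:  (N, D) positions
    mass: (N,) masses
    eps:  softening length (our assumption; not stated in the paper)
    """
    diff = pos[None, :, :] - pos[:, None, :]      # (N, N, D), r_j - r_i
    dist2 = (diff ** 2).sum(-1) + eps ** 2        # softened squared distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                 # no self-interaction
    return G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

def simulate(pos, vel, mass, steps):
    """Integrate one trajectory with leapfrog (kick-drift-kick).

    Returns (steps + 1, N, 2 * D) phase space coordinates, i.e. positions
    and velocities concatenated, matching the quantities the model predicts.
    """
    traj = [np.concatenate([pos, vel], axis=-1)]
    acc = pairwise_accelerations(pos, mass)
    for _ in range(steps):
        vel = vel + 0.5 * DT * acc                # half kick
        pos = pos + DT * vel                      # drift
        acc = pairwise_accelerations(pos, mass)
        vel = vel + 0.5 * DT * acc                # half kick
        traj.append(np.concatenate([pos, vel], axis=-1))
    return np.stack(traj)
```

Under these assumptions, the 1000/200/200 split from the 'Dataset Splits' row would amount to calling `simulate` on 1400 independent random initialisations and slicing each trajectory into consecutive one-step (state, next state) pairs.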
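The 'Experiment Setup' row pins the optimisation recipe down completely enough to sketch in PyTorch. In this sketch, `model` and `batches` are placeholders for the paper's graph network and its simulated trajectory data (neither is reproduced here); the optimiser, learning-rate schedule, and loss follow the quoted hyperparameters.

```python
import torch

def train(model, batches, total_steps=500_000):
    """Optimisation loop matching the quoted setup: ADAM with an initial
    learning rate of 3e-4, decayed by a factor of 0.1 every 200k steps,
    minimising the MSE between predicted and true phase space coordinates
    one time step ahead. `batches` yields (state_t, state_t1) tensor pairs
    with batch size 100."""
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    # StepLR multiplies the learning rate by gamma every step_size steps.
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=200_000, gamma=0.1)
    loss_fn = torch.nn.MSELoss()
    for step, (state_t, state_t1) in zip(range(total_steps), batches):
        pred = model(state_t)           # predicted phase space at t + Δt
        loss = loss_fn(pred, state_t1)  # MSE against the true next state
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()
```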