BroGNet: Momentum-Conserving Graph Neural Stochastic Differential Equation for Learning Brownian Dynamics

Authors: Suresh Bishnoi, Jayadeva, Sayan Ranu, N. M. Anoop Krishnan

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we benchmark the ability of BroGNet to learn Brownian dynamics directly from the trajectory and establish: Accuracy: BroGNet accurately models Brownian dynamics and outperforms baseline models. Zero-shot generalizability: the inductive architecture of BroGNet allows it to accurately generalize to systems of unseen sizes and temperatures.
Researcher Affiliation | Academia | Suresh Bishnoi, Jayadeva, Sayan Ranu, N. M. Anoop Krishnan, Indian Institute of Technology Delhi, Hauz Khas, New Delhi, India 110016. {srz208500,jayadeva,sayanranu,krishnan}@iitd.ac.in
Pseudocode | No | The paper describes the architecture (Fig. 1) and the mathematical formulation of BroGNet but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | The codebase and all datasets used in this work can be accessed from the repository at https://github.com/M3RG-IITD/BroGNet.
Open Datasets | Yes | The codebase and all datasets used in this work can be accessed from the repository at https://github.com/M3RG-IITD/BroGNet. All datasets are generated using the known deterministic forces of the systems along with the stochastic term, as described in Section C and Eq. 2. For each system, the training data is created by performing forward simulations with 100 random initial conditions.
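For context, a minimal sketch of generating such Brownian trajectories with an Euler-Maruyama integrator in JAX (the stack pinned under Software Dependencies) is given below. The harmonic force F(x) = -x, the friction gamma, kT, dt, and the 8-particle 2D layout are illustrative placeholders, not the paper's Section C settings or its exact Eq. 2.

```python
import jax
import jax.numpy as jnp

def brownian_step(key, x, force_fn, dt=1e-3, gamma=1.0, kT=1.0):
    # One Euler-Maruyama step of overdamped Langevin (Brownian) dynamics:
    # x_{t+dt} = x_t + F(x_t)/gamma * dt + sqrt(2*kT*dt/gamma) * N(0, I).
    noise = jax.random.normal(key, x.shape)
    return x + force_fn(x) * dt / gamma + jnp.sqrt(2.0 * kT * dt / gamma) * noise

def simulate(key, x0, force_fn, n_steps=1000):
    # Roll out one trajectory with lax.scan, drawing a fresh PRNG key per step.
    def step(x, k):
        x_next = brownian_step(k, x, force_fn)
        return x_next, x_next
    _, xs = jax.lax.scan(step, x0, jax.random.split(key, n_steps))
    return xs

# 100 random initial conditions, mirroring the data-generation protocol;
# the force and system shape here are placeholders only.
key_init, key_sim = jax.random.split(jax.random.PRNGKey(0))
x0s = jax.vmap(lambda k: jax.random.normal(k, (8, 2)))(jax.random.split(key_init, 100))
trajs = jax.vmap(lambda k, x0: simulate(k, x0, lambda x: -x))(jax.random.split(key_sim, 100), x0s)
```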
Dataset Splits | Yes | The training dataset is divided randomly in an 80:20 ratio, where 80% is used for training and 20% is used as the validation set for hyperparameter optimization.
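As a sketch, the random 80:20 split could look like the following snippet (numpy is among the pinned dependencies). Whether the split is taken at the trajectory or snapshot level is not stated here, so the whole-sample granularity is an assumption.

```python
import numpy as np

def train_val_split(samples, val_frac=0.2, seed=0):
    # Randomly hold out 20% of the training data as the validation set
    # used for hyperparameter optimization; splitting at the level of
    # whole samples/trajectories is an assumption.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_val = int(len(samples) * val_frac)
    return [samples[i] for i in idx[n_val:]], [samples[i] for i in idx[:n_val]]
```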
Hardware Specification | Yes | Memory: 16 GiB system memory; Processor: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz.
Software Dependencies | Yes | numpy-1.20.3, jax-0.2.24, jax-md-0.1.20, jaxlib-0.1.73, jraph-0.0.1.dev0
Experiment Setup | Yes | The detailed training procedure and the hyperparameters employed for each model, identified based on good practices, are provided in this section. Key settings: hidden layer neurons (MLP): 16; number of hidden layers (MLP): 2; activation function: squareplus; optimizer: ADAM; learning rate: 1.0e-3; batch size: 20.
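A minimal sketch instantiating these settings in JAX follows: two hidden layers of 16 neurons each, the squareplus activation, and ADAM at learning rate 1.0e-3. The input/output sizes and the mean-squared-error loss are placeholders; the paper's full graph-network architecture (BroGNet) is not reproduced here. In the pinned jax 0.2.24 the optimizer module lives at jax.experimental.optimizers; newer releases expose it as jax.example_libraries.optimizers.

```python
import jax
import jax.numpy as jnp
from jax.example_libraries import optimizers  # jax.experimental.optimizers in jax 0.2.24

def squareplus(x, b=4.0):
    # Squareplus: a smooth, ReLU-like activation, f(x) = (x + sqrt(x^2 + b)) / 2.
    return 0.5 * (x + jnp.sqrt(x * x + b))

def init_mlp(key, sizes):
    # sizes = [in, 16, 16, out] gives the reported 2 hidden layers of 16 neurons.
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = squareplus(x @ W + b)
    W, b = params[-1]
    return x @ W + b

params = init_mlp(jax.random.PRNGKey(0), [4, 16, 16, 1])  # in/out sizes are placeholders

# ADAM with learning rate 1.0e-3, per the reported hyperparameters.
opt_init, opt_update, get_params = optimizers.adam(1.0e-3)
opt_state = opt_init(params)

def loss_fn(params, batch):
    x, y = batch  # minibatches of 20 per the table; the MSE loss is illustrative
    return jnp.mean((mlp(params, x) - y) ** 2)

@jax.jit
def train_step(i, opt_state, batch):
    grads = jax.grad(loss_fn)(get_params(opt_state), batch)
    return opt_update(i, grads, opt_state)
```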