From Biased to Unbiased Dynamics: An Infinitesimal Generator Approach

Authors: Timothée Devergne, Vladimir Kostic, Michele Parrinello, Massimiliano Pontil

NeurIPS 2024

Reproducibility checklist (variable — result — supporting quotation from the paper):
Research Type: Experimental — "In this section, we test the method described above on well-established molecular dynamics benchmarks [14, 9, 32, 36], featuring biased simulations of increasing complexity. We start by showing the efficiency of our method on a simple one-dimensional double-well potential."
Researcher Affiliation: Academia — Timothée Devergne (CSML & ATSIM, Istituto Italiano di Tecnologia; timothee.devergne@iit.it); Vladimir R. Kostic (CSML, Istituto Italiano di Tecnologia; University of Novi Sad; vladimir.kostic@iit.it); Michele Parrinello (ATSIM, Istituto Italiano di Tecnologia; michele.parrinello@iit.it); Massimiliano Pontil (CSML, Istituto Italiano di Tecnologia; AI Centre, University College London; massimiliano.pontil@iit.it).
Pseudocode: Yes — "Algorithm 1: From biased to unbiased dynamics via infinitesimal generator".
Open Source Code: Yes — "The code used to train the models can be found in the following repository: https://github.com/DevergneTimothee/GenLearn".
Open Datasets: Yes — "The data points are represented in the plane of the distance between the nitrogen atom of residue 3, ASP (ASP3N), and the oxygen atom of residue 7, GLY (GLY7O), and the distance between ASP3N and the oxygen atom of residue 8, THR (THR8), which allow visualizing the folded and unfolded states."
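The two descriptors quoted above are plain Euclidean distances between atom positions. A minimal sketch of the computation is given below; the coordinates and variable names are hypothetical stand-ins, since in the actual workflow the positions come from the simulation trajectory.

```python
import numpy as np

# Hypothetical coordinates (nm) standing in for one trajectory frame.
asp3n = np.array([1.20, 0.85, 0.40])  # N atom of residue 3 (ASP)
gly7o = np.array([0.95, 1.10, 0.55])  # O atom of residue 7 (GLY)
thr8o = np.array([1.45, 0.70, 0.90])  # O atom of residue 8 (THR)

# The two distances spanning the plane used to visualize folded/unfolded states.
d_asp3n_gly7o = np.linalg.norm(asp3n - gly7o)
d_asp3n_thr8 = np.linalg.norm(asp3n - thr8o)
print(d_asp3n_gly7o, d_asp3n_thr8)
```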
Dataset Splits: Yes — "In all the experiments, the datasets were randomly split into a training and a validation dataset. The proportions were set to 80% for training and 20% for validation."
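The reported 80/20 random split can be sketched in PyTorch as follows; `features` is a placeholder tensor, since the paper's actual data points come from biased simulations.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder data: 10,000 two-dimensional points (e.g., descriptor pairs).
features = torch.randn(10_000, 2)
dataset = TensorDataset(features)

# Random 80% / 20% split into training and validation sets, as reported.
n_train = int(0.8 * len(dataset))
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
```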
Hardware Specification: Yes — "All the experiments were performed on a workstation with an AMD Ryzen Threadripper PRO 3975WX 32-core processor and an NVIDIA Quadro RTX 4000 GPU."
Software Dependencies: Yes — "For all the experiments we used PyTorch 1.13, and the models were optimized using the Adam optimizer. The Python version used is 3.9.18. All the simulations were run with GROMACS 2022.3 [2] patched with PLUMED 2.10 [45]."
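For reproduction, a quick sanity check against the two reported Python-side versions might look like the sketch below (the exact pins are taken from the quote above; GROMACS and PLUMED versions must be checked outside Python).

```python
import sys
import torch

# Verify the environment matches the versions reported in the paper.
assert sys.version_info[:2] == (3, 9), "paper used Python 3.9.18"
assert torch.__version__.startswith("1.13"), "paper used PyTorch 1.13"
```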
Experiment Setup: Yes — "We use a learning rate of 5 × 10⁻³; the architecture of the neural network is a multilayer perceptron with layers of size 2 (input), 20, 20, and 1. The parameter η was chosen to be 0.05."
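The reported architecture and optimizer translate into the PyTorch sketch below. The Tanh activation is an assumption (the paper's quote specifies only the layer sizes), and the generator-based training loss in which η enters is not reproduced here.

```python
import torch
from torch import nn

# MLP with the reported layer sizes: 2 (input) -> 20 -> 20 -> 1.
# Tanh activations are an assumption, not specified in the quote above.
model = nn.Sequential(
    nn.Linear(2, 20), nn.Tanh(),
    nn.Linear(20, 20), nn.Tanh(),
    nn.Linear(20, 1),
)

# Adam optimizer with the reported learning rate of 5e-3.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)

eta = 0.05  # regularization parameter reported as η in the paper
```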