Timewarp: Transferable Acceleration of Molecular Dynamics by Learning Time-Coarsened Dynamics

Authors: Leon Klein, Andrew Foong, Tor Fjelde, Bruno Mlodozeniec, Marc Brockschmidt, Sebastian Nowozin, Frank Noé, Ryota Tomioka

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate Timewarp on small peptide systems. To compare with MD, we focus on the slowest transitions between metastable states, as these are the most difficult to traverse."
Researcher Affiliation | Collaboration | Leon Klein (Freie Universität Berlin, leon.klein@fu-berlin.de); Andrew Y. K. Foong (Microsoft Research AI4Science, andrewfoong@microsoft.com); Tor Erlend Fjelde (University of Cambridge, tef30@cam.ac.uk); Bruno Mlodozeniec (University of Cambridge, bkm28@cam.ac.uk); Marc Brockschmidt (Microsoft Research AI4Science); Sebastian Nowozin (Microsoft Research AI4Science); Frank Noé (Microsoft Research AI4Science, Freie Universität Berlin, Rice University, franknoe@microsoft.com); Ryota Tomioka (Microsoft Research AI4Science, ryoto@microsoft.com)
Pseudocode | Yes | "Pseudocode for the MCMC algorithm is given in Algorithm 1 in Appendix C. Pseudocode is given in Algorithm 2 in Appendix D." A hedged sketch of the Metropolis-Hastings step appears after the table.
Open Source Code | Yes | "The code is available here: https://github.com/microsoft/timewarp."
Open Datasets | No | "The datasets are available upon request." (Footnote 3: "Please contact andrewfoong@microsoft.com for dataset access.")
Dataset Splits | No | "For 2AA and 4AA, we train on a randomly selected trainset of short trajectories (50 ns = 10⁸ steps), and evaluate on unseen test peptides." A sketch of a peptide-level split appears after the table.
Hardware Specification | Yes | "The training was performed on 4 NVIDIA A100 GPUs for the 2AA and 4AA datasets and on a single NVIDIA A100 GPU for the AD dataset. Inference with the model as well as all MD simulations were conducted on single NVIDIA V100 GPUs for AD and 2AA, and on single NVIDIA A100 GPUs for 4AA."
Software Dependencies | No | The paper mentions using the OpenMM and DeepSpeed libraries but does not specify their version numbers, which a reproducible description of ancillary software requires. A sketch for recording environment versions appears after the table.
Experiment Setup | Yes | "For all MD simulations we use the parameters shown in Table 1. ... We use a weighted sum of the losses with weights detailed in Table 5. We use the Fused Lamb optimizer and the DeepSpeed library [34] for all experiments. The batch size as well as the training times are reported in Table 6. All simulations are started with a learning rate of 5·10⁻⁴; the learning rate is then successively decreased by a factor of 2 upon hitting training loss plateaus." A sketch of this learning-rate schedule appears after the table.
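
The MCMC step referenced in the Pseudocode row (Algorithm 1, Appendix C) is a standard Metropolis-Hastings correction around a learned conditional proposal. Below is a minimal sketch, not the authors' implementation: `log_mu` (the log Boltzmann density up to a constant), `flow_sample`, and `flow_log_prob` are hypothetical wrappers around the conditional normalizing flow p_theta(x' | x).

```python
import numpy as np

def mh_step(x, log_mu, flow_sample, flow_log_prob, rng):
    """One Metropolis-Hastings step with a learned conditional proposal.

    `rng` is a numpy Generator, e.g. np.random.default_rng(). The proposal
    is asymmetric, so the acceptance ratio includes both flow directions.
    """
    x_prop = flow_sample(x)  # propose x' ~ p_theta(. | x)
    log_alpha = (
        log_mu(x_prop) - log_mu(x)        # target density ratio
        + flow_log_prob(x, cond=x_prop)   # reverse proposal p_theta(x | x')
        - flow_log_prob(x_prop, cond=x)   # forward proposal p_theta(x' | x)
    )
    if np.log(rng.uniform()) < log_alpha:
        return x_prop, True   # accept the proposed jump
    return x, False           # reject and stay at x
```

Because the flow proposes a large time-coarsened jump rather than a single integrator step, one accepted move can traverse a transition between metastable states.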
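For the split protocol quoted in the Dataset Splits row, the key point is that whole peptides, not individual trajectory frames, are held out, so test peptides are unseen at training time. A minimal sketch under that assumption (the dipeptide names and the 10% test fraction are hypothetical; the real 2AA/4AA data are available on request):

```python
import random

def split_peptides(peptides, test_fraction=0.1, seed=0):
    """Hold out whole peptides so test sequences are unseen during training."""
    peptides = sorted(peptides)              # fix order before shuffling
    random.Random(seed).shuffle(peptides)
    n_test = max(1, int(test_fraction * len(peptides)))
    return peptides[n_test:], peptides[:n_test]   # (train, test)

# Hypothetical dipeptide sequences for illustration only.
train, test = split_peptides(["AA", "AD", "AK", "GS", "KS", "QW"])
```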
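Since the OpenMM and DeepSpeed versions are unreported, anyone re-running the code can at least record the versions of their own environment. A minimal sketch using only version attributes known to exist in these libraries:

```python
# Record the versions the paper leaves unspecified.
import openmm
import deepspeed
import torch

print("OpenMM:", openmm.version.version)
print("DeepSpeed:", deepspeed.__version__)
print("PyTorch:", torch.__version__)
```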
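The learning-rate schedule quoted in the Experiment Setup row maps directly onto a plateau scheduler. A minimal sketch: the paper uses DeepSpeed's Fused Lamb optimizer and the Timewarp conditional flow, but a linear layer and `torch.optim.AdamW` stand in here to keep the example self-contained and runnable; the `patience` value is an assumption, as the paper only says "upon hitting training loss plateaus".

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the Timewarp flow
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)  # paper's initial lr
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=10  # halve lr on plateaus; patience assumed
)

for epoch in range(100):
    # Dummy objective standing in for the paper's weighted loss (Table 5).
    loss = model(torch.randn(32, 8)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())  # decrease lr by a factor of 2 when the loss plateaus
```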