Deep Generative Markov State Models

Authors: Hao Wu, Andreas Mardt, Luca Pasquali, Frank Noé

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The deep generative Markov state model (DeepGenMSM) is demonstrated to provide accurate estimates of the long-time kinetics and to generate valid distributions for molecular dynamics (MD) benchmark systems.
Researcher Affiliation | Academia | (1) Dept. of Mathematics and Computer Science, Freie Universität Berlin, 14195 Berlin, Germany; (2) School of Mathematical Sciences, Tongji University, Shanghai, 200092, P.R. China
Pseudocode | No | The paper describes algorithms and training procedures but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | 'For the full code and all details about the neural network architecture, hyper-parameters and training algorithm, please refer to https://github.com/markovmodel/deep_gen_msm.'
Open Datasets | Yes | 'We first apply our framework to the time-discretized diffusion process x_{t+Δt} = x_t − Δt ∇V(x_t) + √(2Δt) η_t with Δt = 0.01 in the Prinz potential V(x) introduced in [22] (Fig. 2a). For this system we know exact results for benchmarking... We simulate trajectories of lengths 250,000 and 125,000 time steps for training and validation, respectively. ... One 250 ns trajectory with a storage interval of 1 ps is used and split 80%/20% for training and validation; see [15] for details of the simulation setup.' (A simulation sketch for this benchmark follows the table.)
Dataset Splits | Yes | 'We simulate trajectories of lengths 250,000 and 125,000 time steps for training and validation, respectively. ... One 250 ns trajectory with a storage interval of 1 ps is used and split 80%/20% for training and validation; see [15] for details of the simulation setup.' (A split sketch also follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU/CPU models or memory amounts.
Software Dependencies | No | 'The networks were implemented using PyTorch [20] and TensorFlow [6].' Specific version numbers for PyTorch and TensorFlow are not provided.
Experiment Setup | No | The paper states: 'The details of the architecture and the training procedure can be found in the Supplementary Information.' and 'Neural network and training details are again found at the git repository and in the Supplementary Information.' This indicates that specific experimental setup details are not present in the main text.
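
For concreteness, here is a minimal sketch of the Prinz-potential benchmark data generation quoted in the Open Datasets row: an Euler-Maruyama discretization of the overdamped diffusion x_{t+Δt} = x_t − Δt ∇V(x_t) + √(2Δt) η_t. The four-well functional form of the potential is not stated in this report and is assumed here from Prinz et al. [22]; this is an illustration under those assumptions, not the authors' code.

```python
import numpy as np

def prinz_potential(x):
    # Assumed four-well form of the Prinz potential (cf. Prinz et al. [22]).
    return 4.0 * (x**8
                  + 0.8 * np.exp(-80.0 * x**2)
                  + 0.2 * np.exp(-80.0 * (x - 0.5)**2)
                  + 0.5 * np.exp(-40.0 * (x + 0.5)**2))

def grad(f, x, h=1e-6):
    # Central finite difference, to avoid hand-deriving dV/dx.
    return (f(x + h) - f(x - h)) / (2.0 * h)

def simulate(n_steps, dt=0.01, x0=0.0, seed=0):
    # Euler-Maruyama step for the overdamped diffusion quoted above:
    #   x_{t+dt} = x_t - dt * V'(x_t) + sqrt(2*dt) * eta_t,  eta_t ~ N(0, 1)
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        x[t] = (x[t - 1]
                - dt * grad(prinz_potential, x[t - 1])
                + np.sqrt(2.0 * dt) * rng.standard_normal())
    return x

train_traj = simulate(250_000)           # training trajectory, as quoted above
valid_traj = simulate(125_000, seed=1)   # validation trajectory
```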
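
Likewise, a minimal sketch of the 80%/20% train/validation split from the Dataset Splits row. The file name is hypothetical, and splitting at a single cut point (rather than some other scheme) is an assumption; see [15] for the authors' actual setup.

```python
import numpy as np

# 'traj' stands in for the 250 ns trajectory (one frame per 1 ps);
# the file name below is hypothetical.
traj = np.load("trajectory.npy")

cut = int(0.8 * len(traj))             # first 80% of frames for training
train, valid = traj[:cut], traj[cut:]  # contiguous split keeps time ordering
```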