Transferable Boltzmann Generators

Authors: Leon Klein, Frank Noé

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The transferability of the proposed framework is evaluated on dipeptides, where we show that it generalizes efficiently to unseen systems. Furthermore, we demonstrate that our proposed architecture enhances the efficiency of Boltzmann Generators trained on single molecular systems.
Researcher Affiliation | Collaboration | Leon Klein (Freie Universität Berlin, leon.klein@fu-berlin.de); Frank Noé (Microsoft Research AI4Science, Freie Universität Berlin, Rice University, franknoe@microsoft.com)
Pseudocode | No | The paper describes methods and procedures in paragraph text, but no explicit pseudocode or algorithm blocks are provided.
Open Source Code | Yes | Our code is available here: https://osf.io/n8vz3/?view_only=1052300a21bd43c08f700016728aa96e.
Open Datasets | Yes | The alanine dipeptide datasets were created in [23] (CC BY 4.0), we refer to them for detailed simulation details. ... The original dipeptide dataset as introduced in [11] (MIT License) is available here: https://huggingface.co/datasets/microsoft/timewarp. As this includes a lot of intermediate saved states and quantities, like energies, we create a smaller version which is available here: https://osf.io/n8vz3/?view_only=1052300a21bd43c08f700016728aa96e.
Dataset Splits | Yes | The simulations of the training peptides were run for 50 ns, while the test set peptides were run for 1 µs. ... The dipeptides are randomly selected, but it is ensured that all amino acids are represented at least once. However, we evaluate the best-performing model, namely TBG + full, for all 100 test peptides.
Hardware Specification | Yes | All training and inference were performed on single NVIDIA A100 GPUs with 80 GB of RAM.
Software Dependencies | Yes | We primarily use the following code libraries: PyTorch (BSD-3) [71], bgflow (MIT license) [8, 39], torchdyn (Apache License 2.0) [72], and NetworkX (BSD-3) [73] for validating graph isomorphisms.
Experiment Setup | Yes | We report the model hyperparameters for the different model architectures as described in Section 4.1 in Table 5. ... We report training hyperparameters for the different model architectures in Table 6. It should be noted that all TBG models are trained in an identical manner if the training set is identical. We use the ADAM optimizer for all experiments [74]. For the dipeptide training, each batch consists of three samples for each peptide.
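The Hugging Face dataset linked under Open Datasets can in principle be fetched programmatically. Below is a minimal sketch, assuming huggingface_hub is installed and the repository is public; the local layout of the downloaded files is not described in the excerpt.

```python
# Hedged sketch: fetch the dipeptide dataset repository mentioned under Open Datasets.
# Only the repo id "microsoft/timewarp" is stated in the paper; everything else here
# (download method, handling of the files) is an assumption.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="microsoft/timewarp", repo_type="dataset")
print(local_dir)  # path to the downloaded dataset files
```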
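The selection rule quoted under Dataset Splits (dipeptides chosen at random while ensuring every amino acid appears at least once) can be expressed as a simple rejection-sampling loop. This is a minimal sketch, not the authors' procedure: the seed, the exact sampling scheme, and which split the coverage constraint applies to are not specified in the excerpt, so the function name and details are illustrative.

```python
# Hedged sketch of the quoted selection rule: draw a random subset of dipeptides
# and reject draws that do not cover all 20 amino acids.
import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")  # 20 standard one-letter codes

def select_dipeptides(candidates, n_select, seed=0):
    """Randomly pick n_select dipeptides such that every amino acid occurs at least once."""
    rng = random.Random(seed)
    while True:
        chosen = rng.sample(candidates, n_select)
        covered = {aa for peptide in chosen for aa in peptide}
        if covered.issuperset(AMINO_ACIDS):
            return chosen

# Example: all 400 possible dipeptides; 100 matches the number of evaluated test peptides.
all_dipeptides = [a + b for a in AMINO_ACIDS for b in AMINO_ACIDS]
subset = select_dipeptides(all_dipeptides, n_select=100)
```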
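The Software Dependencies row notes that NetworkX is used for validating graph isomorphisms. The sketch below shows one way such a check can be done on element-labelled molecular graphs; the graph construction, attribute names, and toy molecule are assumptions, and only the use of NetworkX isomorphism testing is taken from the paper.

```python
# Hedged sketch: check that two molecular graphs share the same topology,
# matching the chemical element stored on each atom node.
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match

def build_graph(elements, bonds):
    """Build an undirected graph with an 'element' attribute per atom (illustrative)."""
    g = nx.Graph()
    for idx, elem in enumerate(elements):
        g.add_node(idx, element=elem)
    g.add_edges_from(bonds)
    return g

# Toy example: a water-like topology in two different atom orderings.
reference = build_graph(["O", "H", "H"], [(0, 1), (0, 2)])
candidate = build_graph(["H", "O", "H"], [(1, 0), (1, 2)])

same_topology = nx.is_isomorphic(
    reference, candidate, node_match=categorical_node_match("element", None)
)
print(same_topology)  # True: an element-preserving isomorphism exists
```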