Rigid Body Flows for Sampling Molecular Crystal Structures
Authors: Jonas Köhler, Michele Invernizzi, Pim De Haan, Frank Noé
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the method by training Boltzmann generators for two molecular examples, namely the multi-modal density of a tetrahedral system in an external field and the ice XI phase in the TIP4P water model. Our flows can be combined with flows operating on the internal degrees of freedom of molecules, and constitute an important step towards the modeling of distributions of many interacting molecules. |
| Researcher Affiliation | Collaboration | 1Microsoft Research AI4Science 2Freie Universität Berlin, Department of Mathematics and Computer Science 3Qualcomm AI Research, an initiative of Qualcomm Technologies, Inc. 4University of Amsterdam 5Freie Universität Berlin, Department of Physics 6Rice University, Department of Chemistry. |
| Pseudocode | No | The paper includes architectural diagrams (e.g., Figure 1, 4, 5) but no explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | All the code used to obtain the results is available at https://github.com/noegroup/rigid-flows. |
| Open Datasets | No | The paper describes generating its own dataset through simulations using the OpenMM and GenIce2 software, rather than using a pre-existing publicly available dataset. No specific link or citation to a public dataset is provided. |
| Dataset Splits | No | The paper mentions splitting the MD run into a part for training and another for LFEP evaluation, but it does not specify a separate validation set split or its proportion. |
| Hardware Specification | Yes | Training on a GeForce GTX 1080 Ti took about 10 minutes for each of the small systems and 30 minutes for the larger one. |
| Software Dependencies | No | The paper mentions several software components, such as OpenMM, GenIce2, jaxopt, the Adam optimizer, and pymbar, but it does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | We use a Langevin integrator at 100 K. We chose a time step of 1 ps and only keep each 500th frame as a sample to ensure proper mixing. We optimize this objective for 50,000 steps for each candidate flow. We used the Adam optimizer (Kingma & Ba, 2015) with a batch size of 32 and a learning rate of 0.0005. We train each model using Adam (Kingma & Ba, 2015) for 1000 iterations per epoch over 10 epochs. We use a batch size of 32 and a cosine scheduler for the learning rate, which annealed the learning rate from 1e-3 in the first epoch to 1e-5 in the final epoch. |
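The reported training schedule (Adam, 10 epochs of 1000 iterations, learning rate annealed from 1e-3 to 1e-5 by a cosine scheduler) can be sketched as follows. This is a minimal, dependency-free illustration of a cosine learning-rate schedule matching the reported endpoints; the authors' exact scheduler implementation is not specified in the paper, so the function below is an assumption for illustration only.

```python
import math

# Reported hyperparameters: 10 epochs x 1000 iterations, cosine anneal
# of the learning rate from 1e-3 (first epoch) to 1e-5 (final epoch).
INIT_LR, FINAL_LR = 1e-3, 1e-5
TOTAL_STEPS = 10 * 1000

def cosine_lr(step: int) -> float:
    """Cosine-anneal the learning rate from INIT_LR to FINAL_LR."""
    t = min(step, TOTAL_STEPS) / TOTAL_STEPS  # progress in [0, 1]
    return FINAL_LR + 0.5 * (INIT_LR - FINAL_LR) * (1 + math.cos(math.pi * t))
```

The schedule starts at `cosine_lr(0) == 1e-3` and decays smoothly to `cosine_lr(10_000) == 1e-5`, clamping at the final value for any later step.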