Neural Network Reparametrization for Accelerated Optimization in Molecular Simulations
Authors: Nima Dehmamy, Csaba Both, Jeet Mohapatra, Subhro Das, Tommi Jaakkola
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply our method to protein folding using classical MD forces. Settings: We use gradient descent to minimize L(X). All experiments (both CG and baseline) use the Adam optimizer... |
| Researcher Affiliation | Collaboration | Nima Dehmamy, IBM Research (Nima.Dehmamy@ibm.com); Csaba Both, Northeastern University (both.c@northeastern.edu); Jeet Mohapatra, MIT CSAIL (jeetmo@mit.edu); Subhro Das, IBM Research (subhro.das@ibm.com); Tommi Jaakkola, MIT CSAIL (tommi@csail.mit.edu) |
| Pseudocode | Yes | Figure 1: Overview of the neural reparametrization method. Top: Architectures used for reparametrization... Left: Flowchart showing the key steps of the method. Right: Detailed algorithm for implementation. |
| Open Source Code | Yes | The code can be found at https://github.com/nimadehmamy/coarse_graining_reparam |
| Open Datasets | Yes | We test our model on several small proteins including Chignolin (5AWL), Trp-Cage (2JOF), Cyclotide (2MGO), and Enkephalin (1PLW)... Figure 4: Protein folding simulations ... RMSD value, which measures how close the final layout is to the PDB layout (see the RMSD sketch after the table). |
| Dataset Splits | No | The paper mentions 'early stopping' but does not provide specific dataset splits (e.g., percentages, sample counts, or formal cross-validation setup) for training, validation, and testing. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running experiments. The NeurIPS checklist for this paper also has 'TODO' for this question. |
| Software Dependencies | No | The paper mentions 'rdkit (Landrum et al., 2020)' and 'OpenMM (Eastman et al., 2017)', but it does not specify exact version numbers for these or any other software components, which would be required for a fully reproducible description. |
| Experiment Setup | Yes | Settings: We use gradient descent to minimize L(X). All experiments (both CG and baseline) use the Adam optimizer with a learning rate of 10^-2 and early stopping with \|δL\| = 10^-6 tolerance and 5 steps patience. We ran each experiment four times. (A minimal optimizer sketch follows the table.) |
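
The optimization settings quoted in the Experiment Setup row (Adam, learning rate 10^-2, early stopping once the loss change stays below 10^-6 for 5 consecutive steps) can be expressed as a short PyTorch loop. This is a minimal sketch under those reported settings, not the authors' implementation; `minimize`, `loss_fn`, `params`, and `max_steps` are placeholder names introduced here for illustration.

```python
import torch

def minimize(loss_fn, params, lr=1e-2, tol=1e-6, patience=5, max_steps=100_000):
    """Run Adam until the loss change stays below `tol` for `patience` steps.

    `loss_fn` is assumed to return a scalar tensor; `params` is an iterable of
    tensors with requires_grad=True (placeholders, not the paper's objects).
    """
    optimizer = torch.optim.Adam(params, lr=lr)
    prev_loss, stall = float("inf"), 0
    for step in range(max_steps):
        optimizer.zero_grad()
        loss = loss_fn()
        loss.backward()
        optimizer.step()
        # Early stopping: count consecutive steps with a negligible loss change.
        if abs(prev_loss - loss.item()) < tol:
            stall += 1
            if stall >= patience:
                break
        else:
            stall = 0
        prev_loss = loss.item()
    return loss.item()
```

The same loop would apply to both the coarse-grained (CG) and baseline runs, since the paper states that both use identical optimizer settings.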
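The RMSD metric quoted in the Open Datasets row (how close the final layout is to the PDB layout) can be computed as below. This is a minimal sketch, not the authors' code; it assumes (N, 3) NumPy arrays of atom coordinates and applies Kabsch superposition before measuring the deviation, which the paper does not explicitly confirm. `kabsch_rmsd` is a hypothetical helper name.

```python
import numpy as np

def kabsch_rmsd(X, Y):
    """RMSD between two (N, 3) coordinate sets after optimal rigid alignment."""
    Xc = X - X.mean(axis=0)                 # center the optimized structure
    Yc = Y - Y.mean(axis=0)                 # center the reference (PDB) structure
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)     # SVD of the covariance (Kabsch algorithm)
    d = np.sign(np.linalg.det(U @ Vt))      # correct for a possible reflection
    Xc_aligned = Xc @ U @ np.diag([1.0, 1.0, d]) @ Vt
    return np.sqrt(((Xc_aligned - Yc) ** 2).sum() / len(X))
```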