Sampling-free Inference for Ab-Initio Potential Energy Surface Networks
Authors: Nicholas Gao, Stephan Günnemann
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experimental evaluation, PlaNet accelerates inference by 7 orders of magnitude for larger molecules like ethanol while preserving accuracy. Compared to previous energy surface networks, PESNet++ reduces energy errors by up to 74%. |
| Researcher Affiliation | Academia | Nicholas Gao, Stephan Günnemann Department of Computer Science & Munich Data Science Institute Technical University of Munich, Germany {n.gao,s.guennemann}@tum.de |
| Pseudocode | Yes | Algorithm 1: (t+1)-th optimization step |
| Open Source Code | Yes | Our source is publicly available, licensed under the Hippocratic license (Ehmke, 2019): https://www.cs.cit.tum.de/daml/pesnet/ |
| Open Datasets | Yes | H4: geometries taken from Pfau et al. (2020). Lithium dimer Li2: 32 evenly distributed distances between 3.5 a0 and 14 a0. Hydrogen chain H10: geometries taken from Motta et al. (2017). Nitrogen dimer N2: geometries taken from Pfau et al. (2020). H2-HF: 64 regular grid points of the N-dimensional energy surface. The boundaries are chosen as: r1, r2 ∈ [1.2 a0, 1.8 a0], R ∈ [3.0 a0, 8.0 a0], θ1, θ2, φ ∈ [0°, 180°]. Ethanol C2H5OH: 64 evenly distributed torsion angles between 0° and 360°. |
| Dataset Splits | No | The paper refers to 'training' but does not explicitly provide the details about training/validation/test dataset splits (e.g., percentages, sample counts, or specific split files) that would be needed for reproduction. |
| Hardware Specification | Yes | We ran all experiments on a machine with 16 AMD EPYC 7742 cores and a single Nvidia A100 GPU. |
| Software Dependencies | No | We implemented Pla Net and PESNet++ on top of the official JAX (Bradbury et al., 2018) implementation of PESNet (Gao & Günnemann, 2022)... |
| Experiment Setup | Yes | Table 4: Default hyperparameters. (includes learning rate, batch size, iterations, WFModel details, DimeNet++ details, PlaNet optimization hyperparameters) |