Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations

Authors: Nicholas Gao, Stephan Günnemann

NeurIPS 2024

Reproducibility variables, results, and supporting LLM responses:
Research Type: Experimental
"Our empirical evaluation finds that a single neural Pfaffian calculates the ground state and ionization energies with chemical accuracy across various systems. On the TinyMol dataset, we outperform the gold-standard CCSD(T) CBS reference energies by 1.9 mEh and reduce energy errors compared to previous generalized neural wave functions by up to an order of magnitude." "In the following, we evaluate NeurPf on several atomic and molecular systems by comparing it to Globe (Gao & Günnemann, 2023a) and TAO (Scherbela et al., 2024)." (Section 5, Experiments)
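For context, "chemical accuracy" conventionally means 1 kcal/mol, roughly 1.6 mEh, so the reported 1.9 mEh margin corresponds to about 1.2 kcal/mol. A minimal check of that conversion, using the standard constant 1 Eh = 627.509 kcal/mol (the constant is not from the paper):

```python
# Convert the reported 1.9 mEh margin to kcal/mol.
# 1 Hartree (Eh) = 627.509 kcal/mol (standard conversion constant).
KCAL_PER_HARTREE = 627.509

margin_mEh = 1.9
margin_kcal = margin_mEh * 1e-3 * KCAL_PER_HARTREE
print(f"{margin_kcal:.2f} kcal/mol")  # ~1.19 kcal/mol

# Chemical accuracy (1 kcal/mol) expressed in mEh:
print(f"{1 / KCAL_PER_HARTREE * 1e3:.2f} mEh")  # ~1.59 mEh
```
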
Researcher Affiliation: Academia
"Nicholas Gao, Stephan Günnemann {n.gao,s.guennemann}@tum.de, Department of Computer Science & Munich Data Science Institute, Technical University of Munich"
Pseudocode: No
The paper describes its methods and procedures in narrative text and mathematical formulas but includes no clearly labeled "Pseudocode" or "Algorithm" block or figure.
Open Source Code: Yes
"We provide the source code publicly on GitHub." https://github.com/n-gao/neural-pfaffian
Open Datasets: Yes
"We use the TinyMol dataset (Scherbela et al., 2024), consisting of a small and large dataset." "Like Gao & Günnemann (2023a), the nitrogen structures are taken from Pfau et al. (2020) and the ethene structures from Scherbela et al. (2022)."
Dataset Splits: No
The paper mentions a "training set" and "test sets" for the TinyMol dataset and discusses optimization within the VMC framework, but it gives no explicit split percentages or absolute sample counts for validation, nor does it cite predefined validation splits that would let the splits be reproduced.
Hardware Specification: Yes
"Tab. 3 lists the compute times required for conducting our experiments measured in Nvidia A100 GPU hours. Depending on the experiment, we use between 1 and 4 GPUs per experiment via data parallelism. We typically allocated 32GB of system memory and 16 CPU cores per experiment."
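The paper does not show the data-parallel setup itself. A minimal sketch of how multi-GPU data parallelism typically looks in JAX (the framework the paper builds on), with a hypothetical placeholder standing in for the per-walker local-energy estimate:

```python
import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()  # 1-4 GPUs in the paper's experiments

def local_energy_mean(walkers):
    # Hypothetical stand-in for a per-walker local-energy estimate;
    # in the paper this would come from the neural Pfaffian wave function.
    e_loc = jnp.sum(walkers ** 2, axis=(-2, -1))
    # Average over this device's walkers, then over all devices.
    return jax.lax.pmean(jnp.mean(e_loc), axis_name="dev")

# Shard a batch of MCMC walkers across devices:
# shape (devices, walkers per device, electrons, 3).
walkers = jnp.zeros((n_dev, 256 // n_dev, 10, 3))
energy = jax.pmap(local_energy_mean, axis_name="dev")(walkers)
```

With replicated parameters and per-device walker batches, gradients can be averaged with the same pmean collective, which is the usual pattern this row's "data parallelism" refers to.
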
Software Dependencies: No
The paper names key software, including JAX (Bradbury et al., 2018), folx (Gao et al., 2023), and the Spring (Goldshlager et al., 2024), Prodigy (Mishchenko & Defazio, 2023), and LAMB (You et al., 2020) optimizers, but it provides no version numbers for any of them, which full reproducibility would require.
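Since no versions are pinned, a reader reproducing the experiments would need to record the package versions in their own environment. A small sketch of doing so with the standard library; the PyPI distribution names here are assumptions, not taken from the paper:

```python
import importlib.metadata as md

# Distribution names are guesses for the cited software; adjust as needed.
for pkg in ("jax", "folx", "prodigyopt"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```
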
Experiment Setup: Yes
"Table 2: Hyperparameters used for the experiments. Most of them were taken directly from Gao & Günnemann (2023a)."