Quadratic Quantum Variational Monte Carlo

Authors: Baiyu Su, Qiang Liu

NeurIPS 2024

Reproducibility assessment: each item lists the variable, the extracted result, and the supporting LLM response.
Research Type: Experimental. "Our extensive experiments showcase Q2VMC's superior performance, achieving faster convergence and lower ground state energies in wavefunction optimization across various molecular systems, without additional computational cost. This study not only advances the field of computational quantum chemistry but also highlights the important role of discretized evolution in variational quantum algorithms, offering a scalable and robust framework for future quantum research."
Researcher Affiliation: Academia. Baiyu Su, University of Texas at Austin (baiyusu@utexas.edu); Qiang Liu, University of Texas at Austin (lqiang@cs.utexas.edu).
Pseudocode: Yes. Algorithm 1 ("QVMC vs Q2VMC") is provided in the paper; a toy sketch of the baseline QVMC loop appears after this list for orientation.
Open Source Code: No. The paper mentions adapting architectures from public implementations of FermiNet [43] and LapNet [45] and using JAX [44] and KFAC-JAX [46], but it does not state that the authors' own code for this work is openly released, nor does it provide a link to it.
Open Datasets: No. The paper mentions testing on "six different molecules" and refers to the Psiformer [8] and LapNet [9] studies, but it does not provide concrete access information (link, DOI, or formal citation for the dataset itself) for a publicly available or open dataset of these molecules.
Dataset Splits: No. The paper states "we optimize the models to 200,000 training iterations" and that "an additional evaluation was conducted over 20,000 steps", but it does not specify explicit training/validation/test splits or mention a distinct validation set.
Hardware Specification: Yes. "For the LapNet experiments, training was conducted on four Nvidia GeForce 3090 GPUs, utilizing standard single-precision calculations and double precision for matrix multiplications, with training durations ranging from 5 to 90 clock hours depending on the size of the molecule. Similarly, Psiformer experiments were performed in single precision on four Nvidia V100 GPUs, with each run varying from 8 to 140 clock hours."
Software Dependencies: No. "All models were implemented using the JAX framework [44]... The architectures were adapted from public implementations of FermiNet [43] and LapNet [45]... Natural gradient updates were based on KFAC-JAX [46]..." While software names are mentioned, specific version numbers for JAX, FermiNet, LapNet, and KFAC-JAX are not provided in the paper's text; a version-recording sketch follows this list.
Experiment Setup: Yes. "The architectural hyperparameters are delineated in Table 4 in the Appendix. To demonstrate the easy integration and robustness of our method, we adhered to all the original training hyperparameters from their publications (detailed in Appendix Table 5)." A minimal run-configuration stub is included at the end of this section.
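
For orientation only, and not the authors' listing: the following is a minimal, hypothetical sketch of the baseline QVMC loop that Algorithm 1 contrasts with Q2VMC, written in JAX for a toy 1D harmonic oscillator with a Gaussian trial wavefunction. The Q2VMC discretized-evolution update and the KFAC natural-gradient step are not reproduced here; plain gradient descent and every name in the snippet are illustrative assumptions.

```python
import jax
import jax.numpy as jnp


def log_psi(alpha, x):
    # Log of the unnormalized Gaussian trial wavefunction psi_alpha(x) = exp(-alpha x^2).
    return -alpha * x ** 2


def local_energy(alpha, x):
    # E_L = (H psi)/psi for H = -1/2 d^2/dx^2 + 1/2 x^2 (1D harmonic oscillator).
    return alpha + x ** 2 * (0.5 - 2.0 * alpha ** 2)


def metropolis_step(key, alpha, x, step_size=0.5):
    # One Metropolis update of all walkers, sampling from |psi_alpha|^2.
    key, k_prop, k_acc = jax.random.split(key, 3)
    proposal = x + step_size * jax.random.normal(k_prop, x.shape)
    log_ratio = 2.0 * (log_psi(alpha, proposal) - log_psi(alpha, x))
    accept = jnp.log(jax.random.uniform(k_acc, x.shape)) < log_ratio
    return key, jnp.where(accept, proposal, x)


@jax.jit
def qvmc_iteration(key, alpha, x, lr=0.05):
    for _ in range(10):  # decorrelate walkers between parameter updates
        key, x = metropolis_step(key, alpha, x)
    e_loc = local_energy(alpha, x)
    # Standard VMC gradient estimator: dE/dalpha = 2 <(E_L - <E_L>) d(log psi)/dalpha>.
    dlogpsi_dalpha = -x ** 2
    grad = 2.0 * jnp.mean((e_loc - e_loc.mean()) * dlogpsi_dalpha)
    # Plain gradient descent stands in for the natural-gradient update here;
    # the Q2VMC scheme would modify this parameter-update step.
    return key, alpha - lr * grad, x, e_loc.mean()


key = jax.random.PRNGKey(0)
walkers = jax.random.normal(key, (4096,))
alpha = 0.2
for _ in range(200):
    key, alpha, walkers, energy = qvmc_iteration(key, alpha, walkers)
print(alpha, energy)  # alpha -> 0.5, energy -> 0.5 (exact ground state)
```

In a real QVMC or Q2VMC run, the Gaussian ansatz is replaced by a neural-network wavefunction such as FermiNet, LapNet, or Psiformer, and the scalar gradient step by a KFAC-JAX natural-gradient update.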
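
Because the paper names its software stack without version numbers, a reproduction attempt should record the versions it actually used. A minimal sketch, assuming the packages are installed under their usual PyPI names (an assumption, not from the paper):

```python
# Record the versions of the named dependencies for a reproduction attempt.
# The package names below are assumptions; the paper cites JAX, KFAC-JAX,
# FermiNet, and LapNet without giving version numbers.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("jax", "jaxlib", "kfac-jax"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```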
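
The only schedule numbers stated in the text are the 200,000 optimization iterations and the separate 20,000-step evaluation; everything else defers to Appendix Tables 4 and 5. A hypothetical configuration stub collecting what the text does specify (the dictionary layout is an assumption, not the authors' format):

```python
# Hypothetical run-configuration stub; only values quoted in the assessment
# above are filled in.
run_config = {
    "train_iterations": 200_000,   # stated optimization budget
    "eval_iterations": 20_000,     # separate evaluation phase
    "num_gpus": 4,                 # GeForce 3090 (LapNet runs) or V100 (Psiformer runs)
    "precision": "single, with double precision for matrix multiplications",
    "hyperparameters": "see Appendix Tables 4 and 5 of the paper",
}
```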