Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms

Authors: Marc Wanner, Laura Lewis, Chiranjib Bhattacharyya, Devdatt Dubhashi, Alexandru Gheorghiu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We also perform numerical experiments on systems of up to 45 qubits that confirm the improved scaling of our approach compared to [1, 2]."
Researcher Affiliation | Academia | Marc Wanner (Computer Science and Engineering, Chalmers University of Technology and University of Gothenburg, wanner@chalmers.se); Laura Lewis (Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom, llewis@alumni.caltech.edu); Chiranjib Bhattacharyya (Computer Science and Automation, Indian Institute of Science, Bangalore, India, chiru@iisc.ac.in); Devdatt Dubhashi (Computer Science and Engineering, Chalmers University of Technology and University of Gothenburg, dubhashi@chalmers.se); Alexandru Gheorghiu (Computer Science and Engineering, Chalmers University of Technology and University of Gothenburg, aleghe@chalmers.se)
Pseudocode | Yes | Algorithm 1: "Deep learning-based prediction of ground state properties" (a hedged sketch of this prediction structure appears after the table).
Open Source Code | Yes | "The code can be found at https://github.com/marcwannerchalmers/learning_ground_states.git."
Open Datasets | No | "We obtained the data by approximating the ground state using the density-matrix renormalization group (DMRG) [96] based on matrix-product-states (MPS) [97], as has been done in [1, 2]." (An illustrative DMRG example follows the table.)
Dataset Splits | No | The paper describes generating training data and evaluating prediction performance, but does not explicitly report train/validation/test splits as percentages or absolute sample counts, which limits reproducibility of the data partitioning.
Hardware Specification | Yes | "The simulations were performed on Nvidia T4 and A40 graphical processing units (GPUs). The former were used for lattice sizes from 4x5 up to 7x5 while the latter were used for lattice sizes 8x5 and 9x5. ... Our deep learning model was also trained on Nvidia T4 and A40 GPUs."
Software Dependencies | No | The paper mentions using the AdamW optimization algorithm [83] but does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | "For each of the local models f_θ^(P), we use fully connected deep neural networks with five hidden layers of width 200. We train the model with the AdamW optimization algorithm [83]. We measure the training error and prediction error via the root-mean-square error (RMSE). ... For each data point, we trained a combined model for 500 epochs." (A minimal training sketch matching this setup follows the table.)
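
As referenced in the Pseudocode row, the following is a hedged sketch of the structure Algorithm 1's title suggests: the target property is predicted as a sum of local models f_θ^(P), each seeing only the Hamiltonian parameters associated with a local region P. The patch construction and dimensions below are illustrative assumptions, not the paper's actual algorithm; see the linked repository for the real implementation.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim: int, width: int, depth: int) -> nn.Sequential:
    """Fully connected network with `depth` hidden layers of size `width`."""
    layers = [nn.Linear(in_dim, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

class SumOfLocalModels(nn.Module):
    """Predicts a property as the sum of local models f_theta^(P),
    one per local parameter patch P (an assumed decomposition for
    illustration, following the spirit of Algorithm 1)."""

    def __init__(self, patches: list[list[int]], width: int = 200, depth: int = 5):
        super().__init__()
        self.patches = patches  # indices of the parameters feeding each f_theta^(P)
        self.local_models = nn.ModuleList(
            make_mlp(len(p), width, depth) for p in patches
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_params) vector of Hamiltonian coupling parameters
        return sum(
            f(x[:, idx]) for idx, f in zip(self.patches, self.local_models)
        ).squeeze(-1)

# Hypothetical example: 10 couplings, overlapping patches of 3 neighbours.
patches = [[i, i + 1, i + 2] for i in range(8)]
model = SumOfLocalModels(patches)
x = torch.randn(4, 10)
print(model(x).shape)  # torch.Size([4])
```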
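For the Open Datasets row: the paper generates its data by approximating ground states with DMRG on matrix product states, but the exact Hamiltonian family and DMRG settings are not restated in this report. As a stand-in illustration, the snippet below runs DMRG with the TeNPy library on a transverse-field Ising chain; the model choice, system size, and truncation parameters are placeholder assumptions, not the paper's setup.

```python
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import dmrg

L = 20  # placeholder system size; the paper uses 2D lattices up to 9x5
model = TFIChain({"L": L, "J": 1.0, "g": 1.5, "bc_MPS": "finite"})
psi = MPS.from_product_state(model.lat.mps_sites(), ["up"] * L, bc="finite")

dmrg_params = {
    "trunc_params": {"chi_max": 64, "svd_min": 1e-10},  # MPS bond dimension cap
    "max_sweeps": 10,
}
info = dmrg.run(psi, model, dmrg_params)  # optimizes psi in place

print("ground state energy:", info["E"])
print("local <sigma_z>:", psi.expectation_value("Sigmaz"))
```

After a run like this, the optimized MPS can be measured to produce the (parameters, property) training pairs that the learning stage consumes.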
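For the Experiment Setup row: the sketch below matches the reported configuration, fully connected networks with five hidden layers of width 200, AdamW optimization, RMSE as the error measure, and 500 training epochs. The input dimension, learning rate, and the random data are placeholder assumptions; the actual training pipeline is in the linked repository.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim: int, width: int = 200, depth: int = 5) -> nn.Sequential:
    # Five hidden layers of width 200, as reported in the paper
    layers = [nn.Linear(in_dim, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

def rmse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return torch.sqrt(torch.mean((pred - target) ** 2))

# Dummy data for illustration only; real inputs are Hamiltonian parameters
# and real targets are ground state property values.
X, y = torch.randn(512, 32), torch.randn(512)

model = make_mlp(in_dim=32)  # input dimension is a placeholder
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)  # lr is an assumption

for epoch in range(500):  # "trained a combined model for 500 epochs"
    opt.zero_grad()
    loss = torch.mean((model(X).squeeze(-1) - y) ** 2)  # MSE for gradients
    loss.backward()
    opt.step()

print("training RMSE:", rmse(model(X).squeeze(-1), y).item())
```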