Critical Points in Quantum Generative Models
Author: Eric Ricardo Anschuetz
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now test our analytic predictions using numerical simulations. |
| Researcher Affiliation | Academia | Eric R. Anschuetz, MIT Center for Theoretical Physics, Cambridge, MA 02139, USA (eans@mit.edu) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a specific link or explicit statement about the availability of its source code. |
| Open Datasets | No | The paper does not use a publicly available dataset; it instead defines a Hamiltonian and generates problem instances for simulation. "H_{T,U} is the 1D n-site spinless Fermi–Hubbard Hamiltonian (Negele & Orland, 1998) at half filling. Here, we take units such that the mean eigenvalue of the considered Hamiltonian (minus its smallest eigenvalue) is E = 1." (A minimal sketch of this normalization appears below the table.) |
| Dataset Splits | No | The paper does not provide training, validation, or test dataset splits. The numerical experiments instead generate instances for simulation: "To estimate the empirical distribution of local minima for the studied instances of the variational quantum eigensolver (VQE) (Peruzzo et al., 2014), we repeated this procedure 52 times, using a new ansatz and uniformly random starting point for each training instance." |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run its experiments. |
| Software Dependencies | No | The paper mentions software used, 'Qiskit (Abraham et al., 2019)', but does not provide specific version numbers for it or any other ancillary software dependencies. |
| Experiment Setup | Yes | "Our implementation of gradient descent used a learning rate of 0.05 and a momentum of 0.9, and halted when either the function value improved by no more than 10^-5 or after 10^6 iterations, whichever came first. We initialized each instance at a uniformly random point in parameter space, with each parameter initialized within [-2π, 2π]." (A hedged sketch of this optimization loop appears below the table.) |
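
The unit convention quoted in the Open Datasets row (shift the Hamiltonian so its smallest eigenvalue is zero, then rescale so the mean of the shifted spectrum is 1) can be reproduced in a few lines of NumPy. The following is a minimal sketch, assuming the Hamiltonian is available as a dense Hermitian NumPy array `H`; the function name `normalize_hamiltonian` is illustrative and not taken from the paper's code.

```python
import numpy as np

def normalize_hamiltonian(H):
    """Shift H so its smallest eigenvalue is 0, then rescale so the mean
    eigenvalue of the shifted operator equals 1 (the unit convention
    quoted from the paper)."""
    eigvals = np.linalg.eigvalsh(H)        # spectrum of the Hermitian matrix
    shifted = eigvals - eigvals.min()      # smallest eigenvalue mapped to 0
    scale = shifted.mean()                 # mean of the shifted spectrum
    return (H - eigvals.min() * np.eye(len(H))) / scale
```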
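
The Experiment Setup row fully specifies the classical optimizer, so it can be sketched directly. Below is a minimal, hedged implementation of gradient descent with momentum under those settings; `loss` and `grad` stand in for the VQE energy and its gradient (which the paper evaluates through a Qiskit circuit ansatz not reproduced here), and `train_instance` is a hypothetical name rather than the paper's code.

```python
import numpy as np

def train_instance(loss, grad, n_params, lr=0.05, momentum=0.9,
                   tol=1e-5, max_iters=10**6, rng=None):
    """Gradient descent with momentum using the quoted settings: learning
    rate 0.05, momentum 0.9, halt when the loss improves by no more than
    1e-5 or after 1e6 iterations, whichever comes first."""
    rng = np.random.default_rng() if rng is None else rng
    # Uniformly random initialization, each parameter drawn from [-2*pi, 2*pi].
    theta = rng.uniform(-2 * np.pi, 2 * np.pi, size=n_params)
    velocity = np.zeros(n_params)
    prev = loss(theta)
    for _ in range(int(max_iters)):
        velocity = momentum * velocity - lr * grad(theta)
        theta = theta + velocity
        current = loss(theta)
        if prev - current <= tol:  # improvement no larger than the tolerance
            return theta, current
        prev = current
    return theta, prev
```

To mirror the protocol quoted in the Dataset Splits row, this routine would be run 52 times per problem instance, each time with a fresh ansatz and a new uniformly random starting point, with the distribution of returned energies giving the empirical distribution of local minima.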