Exponentially Many Local Minima in Quantum Neural Networks
Authors: Xuchen You, Xiaodi Wu
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we empirically confirm that our constructions can indeed be hard instances in practice with typical gradient-based optimizers, which demonstrates the practical value of our findings. |
| Researcher Affiliation | Academia | 1Joint Center for Quantum Information and Computer Science, University of Maryland 2Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland. Correspondence to: X.You <xyou@umd.edu>, X.Wu <xwu@cs.umd.edu>. |
| Pseudocode | No | Insufficient information. The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | Insufficient information. The paper does not provide any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | No | Insufficient information. The paper describes the construction of datasets for its theoretical analysis and the generation of data for experiments, but does not provide concrete access information (link, DOI, formal citation) for a publicly available or open dataset. |
| Dataset Splits | No | Insufficient information. The paper does not provide specific details on training, validation, and test dataset splits (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | Yes | The experiments are run on an Intel Core i7-7700HQ Processor (2.80 GHz) with 16 GB of memory. |
| Software Dependencies | No | Insufficient information. The paper mentions 'Pytorch (Paszke et al., 2019)' but does not provide a specific version number for the software used. |
| Experiment Setup | No | Insufficient information. The paper names the optimizers used (Adam, RMSProp, L-BFGS) and the parameter initialization strategy ('uniformly sample the initial parameters from [0, 2π)^p'), but critical hyperparameters such as the learning rate, batch size, and number of epochs are not given in the main text; 'all the training details' are deferred to the supplementary material. A minimal sketch of such a setup appears after this table. |
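
The rows above quote the paper's initialization strategy (uniform over [0, 2π)^p) and the optimizers it names (Adam, RMSProp, L-BFGS), but not its loss or hyperparameters. The following is a minimal PyTorch sketch of that setup, assuming a toy cosine loss as a stand-in for the paper's hard-instance QNN loss (which is not reproduced here); the parameter count `p` and the learning rates are illustrative guesses, since the paper defers all training details to its supplementary material.

```python
import math
import torch

# Hypothetical stand-in for the paper's QNN loss: a sum of cosines of the
# p circuit parameters. The paper's actual loss comes from its
# hard-instance construction and is NOT reproduced here.
def toy_qnn_loss(theta: torch.Tensor) -> torch.Tensor:
    return torch.cos(theta).sum()

p = 8  # number of circuit parameters (illustrative choice)

# Initialization as quoted above: uniform over [0, 2*pi)^p.
theta = torch.rand(p) * 2 * math.pi
theta.requires_grad_(True)

# Any of the optimizers named in the paper; the learning rate is a guess.
optimizer = torch.optim.Adam([theta], lr=0.01)
# optimizer = torch.optim.RMSprop([theta], lr=0.01)

for step in range(500):
    optimizer.zero_grad()
    loss = toy_qnn_loss(theta)
    loss.backward()
    optimizer.step()

# L-BFGS, the third optimizer named, needs a closure that re-evaluates
# the loss on each call to step().
lbfgs = torch.optim.LBFGS([theta], lr=0.1)

def closure():
    lbfgs.zero_grad()
    loss = toy_qnn_loss(theta)
    loss.backward()
    return loss

for step in range(20):
    lbfgs.step(closure)

print(f"final loss: {toy_qnn_loss(theta).item():.4f}")
```

With a loss like this, different random draws from [0, 2π)^p converge to different local minima, which is the qualitative behavior the paper's experiments probe on its constructed hard instances.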