Non-asymptotic Approximation Error Bounds of Parameterized Quantum Circuits
Authors: Zhan Yu, Qiuhao Chen, Yuling Jiao, Yinan Li, Xiliang Lu, Xin Wang, Jerry Yang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We further validate the approximation capability of PQCs through numerical experiments. |
| Researcher Affiliation | Academia | 1 School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China; 2 Centre for Quantum Technologies, National University of Singapore, 117543, Singapore; 3 Hubei Key Laboratory of Computational Science, Wuhan 430072, China; 4 Thrust of Artificial Intelligence, Information Hub, Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China |
| Pseudocode | No | The paper includes circuit diagrams (e.g., Figure 1) and detailed descriptions of algorithms and constructions in prose and mathematical notation but does not contain explicitly labeled "Pseudocode" or "Algorithm" blocks. |
| Open Source Code | Yes | We have provided the complete code in the supplementary materials that is necessary to reproduce the experimental results. |
| Open Datasets | No | We randomly sample 200 data points within the domain [0, 1] to create training and test datasets for D(x). |
| Dataset Splits | No | The paper mentions 'training and test datasets' but does not specify a separate validation split or explicit percentages/counts for training, validation, and test datasets. |
| Hardware Specification | Yes | Both learning processes are implemented on an Intel(R) Xeon(R) Gold 6248 CPU @ 2.50 GHz. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not provide specific version numbers for software components or libraries. |
| Experiment Setup | Yes | Each parameter of the PQC is randomly initialized within the range [0, π]. We use the Adam optimizer [55] with a learning rate of 0.01 to minimize the Mean Squared Error (MSE) loss function during training. The training process was limited to a maximum of 300 iterations with a batch size of 100 data points. Early termination occurred if the MSE reached below 10⁻⁴. |
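
The Open Datasets and Experiment Setup rows above describe a concrete training pipeline. The following is a minimal sketch of such a pipeline, not the authors' code (which is in their supplementary materials): it uses a single-qubit data re-uploading ansatz in PennyLane with the PyTorch interface and mirrors only the quoted hyperparameters (initialization in [0, π], Adam with learning rate 0.01, MSE loss, at most 300 iterations, batch size 100, early stop below 10⁻⁴). The layer count `L`, the stand-in target function (the paper's D(x) is not reproduced here), the 50/50 train/test split, and the random seed are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above); not the paper's implementation.
import pennylane as qml
import torch

torch.set_default_dtype(torch.float64)
torch.manual_seed(0)

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def pqc(x, weights):
    # Data re-uploading: alternate input encoding and trainable rotations.
    for w in weights:
        qml.RY(x, wires=0)                   # encode the scalar input
        qml.Rot(w[0], w[1], w[2], wires=0)   # trainable Euler rotation
    return qml.expval(qml.PauliZ(0))

# Hypothetical stand-in target; the paper's D(x) differs.
target = lambda x: torch.cos(2 * torch.pi * x)

# 200 points sampled uniformly in [0, 1]; the 50/50 split is an assumption.
x_all = torch.rand(200)
y_all = target(x_all)
x_train, y_train = x_all[:100], y_all[:100]   # 100 points = quoted batch size
x_test, y_test = x_all[100:], y_all[100:]

L = 4                                                      # layer count: illustrative
weights = (torch.rand(L, 3) * torch.pi).requires_grad_()   # init uniformly in [0, pi]

opt = torch.optim.Adam([weights], lr=0.01)
loss_fn = torch.nn.MSELoss()

for step in range(300):                        # at most 300 iterations
    opt.zero_grad()
    pred = torch.stack([pqc(x, weights) for x in x_train])
    loss = loss_fn(pred, y_train)
    loss.backward()
    opt.step()
    if loss.item() < 1e-4:                     # early termination criterion
        break

with torch.no_grad():
    test_pred = torch.stack([pqc(x, weights) for x in x_test])
    print("test MSE:", loss_fn(test_pred, y_test).item())
```

Evaluating the circuit point by point keeps the sketch simple; the authors' released code may batch circuit evaluations or use a different ansatz and target entirely.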