Concentration of Data Encoding in Parameterized Quantum Circuits
Authors: Guangxi Li, Ruilin Ye, Xuanqiang Zhao, Xin Wang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To support our findings, we numerically verify these results on both synthetic and public data sets. Our results highlight the significance of quantum data encoding and may shed light on the future design of quantum encoding strategies. |
| Researcher Affiliation | Collaboration | Guangxi Li (1,2), Ruilin Ye (2,3), Xuanqiang Zhao (2), Xin Wang (2); (1) University of Technology Sydney, NSW, Australia; (2) Institute for Quantum Computing, Baidu Research, Beijing, China; (3) Peking University, Beijing, China |
| Pseudocode | No | No pseudocode or algorithm blocks are explicitly labeled or presented in a structured format. |
| Open Source Code | No | Our experimental results could be easily reproduced. (No direct link to the paper's code or explicit statement of its release for the methodology.) |
| Open Datasets | Yes | The handwritten digit data set MNIST [61] consists of 70k images labeled from 0 to 9, each of which contains 28 × 28 gray scale pixels valued in [0, 255]. (See the loading sketch after this table.) |
| Dataset Splits | Yes | Next, we examine the performance of QNNs and POVMs by generating 20k data samples for training and 4k for testing under the encoding strategy in Fig. 4, where half of the data belong to class 0, and the others belong to class 1. (See the split sketch after this table.) |
| Hardware Specification | No | All our numerical experiments could be run on a personal laptop. (No specific hardware models or detailed specifications are provided.) |
| Software Dependencies | No | All the simulations and optimization loop are implemented via Paddle Quantum on the PaddlePaddle Deep Learning Platform [59]. (Specific versions of Paddle Quantum or PaddlePaddle are not provided.) |
| Experiment Setup | Yes | During the optimization, we adopt the Adam optimizer [60] with a batch size of 200 and a learning rate of 0.02. In the POVM setting, we directly employ semi-definite programming [56] to obtain the maximum success probability P_succ on the training data samples. ... The settings of QNN are almost the same as those used in the synthetic case, except for a new learning rate of 0.05. (See the optimizer and SDP sketches after this table.) |
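
The MNIST preprocessing is not detailed beyond the row above (70k images, 28 × 28 pixels in [0, 255]). Below is a minimal loading sketch; the use of scikit-learn's `fetch_openml` and the rescaling of pixels into rotation angles for angle encoding are both assumptions, not the paper's stated pipeline.

```python
import numpy as np
from sklearn.datasets import fetch_openml

# Fetch the 70k MNIST images; each row is 28 x 28 = 784 pixels in [0, 255].
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)

# ASSUMPTION: rescale pixels to [0, pi] so each value can act as a rotation
# angle in an angle-encoding circuit; the paper does not specify this step.
X_angles = X.astype(np.float64) / 255.0 * np.pi
print(X_angles.shape)  # (70000, 784)
```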
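
The synthetic generator behind the 20k/4k split is not quoted above, so the sketch below only mirrors the balanced split itself; the two-class Gaussian data is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def balanced_samples(n, dim=2):
    # HYPOTHETICAL generator: two well-separated Gaussians stand in for the
    # paper's synthetic data; only the balanced class split is from the paper.
    half = n // 2
    x0 = rng.normal(loc=-1.0, scale=0.5, size=(half, dim))  # class 0
    x1 = rng.normal(loc=+1.0, scale=0.5, size=(half, dim))  # class 1
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(half, dtype=int), np.ones(half, dtype=int)])
    perm = rng.permutation(n)
    return x[perm], y[perm]

x_train, y_train = balanced_samples(20_000)  # 20k training samples
x_test, y_test = balanced_samples(4_000)     # 4k test samples
```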
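
The quoted optimizer settings (Adam, batch size 200, learning rate 0.02, or 0.05 for MNIST) map directly onto PaddlePaddle. The sketch below wires them up around a hypothetical `ToyModel`; the paper's actual model is a Paddle Quantum QNN whose ansatz is not given in the table.

```python
import paddle

BATCH_SIZE = 200       # from the paper
LEARNING_RATE = 0.02   # 0.05 for the MNIST experiments

# HYPOTHETICAL stand-in for the Paddle Quantum QNN.
class ToyModel(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.linear = paddle.nn.Linear(4, 2)

    def forward(self, x):
        return self.linear(x)

model = ToyModel()
opt = paddle.optimizer.Adam(learning_rate=LEARNING_RATE,
                            parameters=model.parameters())
loss_fn = paddle.nn.CrossEntropyLoss()

# One dummy batch, just to exercise the optimization loop.
x = paddle.randn([BATCH_SIZE, 4])
y = paddle.randint(0, 2, [BATCH_SIZE])
for step in range(10):
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    opt.clear_grad()
```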
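
For the POVM setting, the row above says only that semi-definite programming yields the maximum success probability P_succ. The standard discrimination SDP (maximize sum_i p_i Tr(E_i rho_i) over POVMs {E_i}) is sketched below; the choice of cvxpy is an assumption, as the paper does not name its SDP toolchain.

```python
import numpy as np
import cvxpy as cp

def max_success_probability(rhos, priors):
    """Maximum success probability of discriminating the density matrices
    `rhos` drawn with prior probabilities `priors`, via the standard SDP:
        maximize   sum_i p_i * Tr(E_i rho_i)
        subject to E_i >> 0 (PSD),  sum_i E_i = I.
    """
    d = rhos[0].shape[0]
    povm = [cp.Variable((d, d), hermitian=True) for _ in rhos]
    constraints = [E >> 0 for E in povm]
    constraints.append(sum(povm) == np.eye(d))
    objective = cp.Maximize(cp.real(
        sum(p * cp.trace(E @ rho) for p, E, rho in zip(priors, povm, rhos))))
    problem = cp.Problem(objective, constraints)
    problem.solve()
    return problem.value

# Example: |0><0| vs |+><+| with equal priors; the SDP value matches the
# Helstrom bound 1/2 + ||rho0 - rho1||_1 / 4 ~= 0.8536.
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
plus = np.array([[1], [1]], dtype=complex) / np.sqrt(2)
rho1 = plus @ plus.conj().T
print(max_success_probability([rho0, rho1], [0.5, 0.5]))
```

For two states this SDP reduces to the Helstrom bound, which gives a quick sanity check on the solver output.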