Power and limitations of single-qubit native quantum neural networks
Authors: Zhan Yu, Hongshun Yao, Mujin Li, Xin Wang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We further demonstrate the expressivity and limitations of single-qubit native QNNs via numerical experiments. In order to better illustrate the expressive power of single-qubit native QNNs, we supplement the theoretical results with numerical experiments. |
| Researcher Affiliation | Industry | Institute for Quantum Computing, Baidu Research, Beijing 100193, China |
| Pseudocode | No | The paper describes methods using mathematical equations and circuit diagrams, but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper includes a self-reported 'Yes' to providing code for reproduction in the ethics review guidelines, but no direct link to a repository or explicit statement about code availability in supplementary material is present in the main text or appendices. |
| Open Datasets | No | The paper states 'The dataset consists of 300 data points uniformly sampled from the interval [0, π]' and 'The training set consists of 400 data points sampled from interval [−π, π]²', indicating custom-generated data for which no public access information (link, DOI, or formal citation to a public repository) is provided. |
| Dataset Splits | No | For univariate function approximation, the paper states 'The dataset consists of 300 data points uniformly sampled from the interval [0, π], from which 200 are selected for the training set and 100 for the test set.' However, for multivariate function approximation, it only mentions 'The training set consists of 400 data points sampled from interval [−π, π]²', without specifying test or validation splits. No explicit validation split is mentioned for either case. |
| Hardware Specification | Yes | All simulations are carried out with the Paddle Quantum toolkit on the PaddlePaddle Deep Learning Platform, using a desktop with an 8-core i7 CPU and 32GB RAM. |
| Software Dependencies | No | The paper mentions the 'Paddle Quantum toolkit' and the 'PaddlePaddle Deep Learning Platform' but does not specify their version numbers. |
| Experiment Setup | Yes | The parameters of trainable gates are initialized from the uniform distribution on [0, 2π]. We adopt a variational quantum algorithm, where a gradient-based optimizer is used to search and update parameters in the QNN. The mean squared error (MSE) serves as the loss function. Here the Adam optimizer is used with a learning rate of 0.1. We set the training iterations to be 100 with a batch size of 20 for all experiments. |
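The reported setup (300 points uniform on [0, π] split 200/100, trainable parameters initialized uniformly on [0, 2π], a single-qubit data re-uploading circuit, and an MSE loss) can be sketched as follows. This is a minimal numpy-only illustration, not the authors' Paddle Quantum implementation: the layer count `L`, the `cos(x)` target function, and the encoding/trainable gate choices are assumptions for the sketch, and the Adam optimization loop (learning rate 0.1, 100 iterations, batch size 20) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit operators used by the circuit simulation.
I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rz(theta):
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * Z

def ry(theta):
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * Y

def qnn_output(x, params):
    """Data re-uploading circuit: alternate trainable rotations with an
    encoding of the input x, then measure <Z> on the final state."""
    state = np.array([1, 0], dtype=complex)  # start in |0>
    for a, b in params:
        state = ry(b) @ rz(a) @ state        # trainable block
        state = rz(x) @ state                # data-encoding block
    return float(np.real(state.conj() @ Z @ state))

# Dataset as described: 300 points uniform on [0, pi], split 200 / 100.
xs = rng.uniform(0, np.pi, 300)
train_x, test_x = xs[:200], xs[200:]
target = np.cos                              # hypothetical target function
train_y = target(train_x)

# Trainable parameters initialized uniformly on [0, 2*pi].
L = 3                                        # number of layers (assumption)
params = rng.uniform(0, 2 * np.pi, size=(L, 2))

def mse(params, inputs, labels):
    """Mean squared error between QNN predictions and labels."""
    preds = np.array([qnn_output(x, params) for x in inputs])
    return float(np.mean((preds - labels) ** 2))

loss = mse(params, train_x, train_y)
```

Because the circuit output is a Pauli-Z expectation value, predictions are bounded in [−1, 1], so the initial MSE against a target in that range stays finite; in the paper this loss would then be minimized with Adam at learning rate 0.1 over 100 iterations with batch size 20.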