QuantumDARTS: Differentiable Quantum Architecture Search for Variational Quantum Algorithms
Authors: Wenjie Wu, Ge Yan, Xudong Lu, Kaisen Pan, Junchi Yan
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct intensive experiments on unweighted Max-Cut, ground state energy estimation, and image classification. The superior performance shows the efficiency and capability of macro search, which requires little prior knowledge. Moreover, the experiments on micro search show the potential of our algorithm for large-scale QAS problems. |
| Researcher Affiliation | Academia | MoE Key Lab of AI, Shanghai Jiao Tong University, Shanghai, China. Correspondence to: Junchi Yan <yanjunchi@sjtu.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 Macro quantum architecture search |
| Open Source Code | No | The paper states 'The source code is written using PyTorch v1.12.1.' but does not provide a link to the authors' implementation or state that the code for the described methodology is released as open source. |
| Open Datasets | Yes | We also test our algorithm on image classification on MNIST (LeCun et al., 1998) in both noiseless and noisy environments. |
| Dataset Splits | Yes | The training set and test set contain 12,665 and 2,115 samples, respectively. |
| Hardware Specification | Yes | Experiments are performed on a commodity workstation with 4 Intel(R) Xeon(R) Platinum 8276 CPUs @ 2.20GHz (224 cores in total) and an NVIDIA A100 PCIe GPU. |
| Software Dependencies | Yes | The source code is written using PyTorch v1.12.1. |
| Experiment Setup | Yes | We use the Adam optimizer and a cosine annealing schedule (Loshchilov & Hutter, 2016) to train our model. We train the CNN, QCNN, and our model on the whole training set for 5 epochs and evaluate them on the test set. The hidden dimension is set to 8 for angle encoding and 16 for dense encoding. We train the autoencoder for 10 epochs on the training set with the mean squared error (MSE) loss. The Adam optimizer is used for optimization. |
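
The experiment-setup row above describes the reported training configuration (Adam optimizer, cosine annealing schedule, 5 training epochs). The sketch below is a minimal, hedged illustration of that configuration in PyTorch, the framework named by the paper; the stand-in classifier, the synthetic data loader, the learning rate, and the two-class MNIST assumption are all hypothetical placeholders rather than the authors' actual quantum model or code.

```python
# Illustrative sketch of the reported training loop: Adam + cosine annealing for 5 epochs.
# The model, data, and learning rate here are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the trained model (hidden dimension 8, per the angle-encoding setting).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 8), nn.ReLU(), nn.Linear(8, 2))

# Synthetic placeholder data; the paper uses a two-class MNIST subset (12,665 train / 2,115 test).
data = TensorDataset(torch.randn(256, 1, 28, 28), torch.randint(0, 2, (256,)))
train_loader = DataLoader(data, batch_size=32, shuffle=True)

epochs = 5                                                         # reported: 5 training epochs
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)          # Adam; learning rate assumed
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
criterion = nn.CrossEntropyLoss()

for epoch in range(epochs):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()                                               # cosine annealing stepped per epoch
```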