Differentiable Analog Quantum Computing for Optimization and Control
Authors: Jiaqi Leng, Yuxiang Peng, Yi-Ling Qiao, Ming Lin, Xiaodi Wu
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Applying our framework to quantum optimization and control, we observe a significant advantage of differentiable analog quantum computing against SOTAs based on parameterized digital quantum circuits by orders of magnitude. ... Applications of our framework on quantum optimization (Sec. 4) and control (Sec. 5), with demonstrated advantages by orders of magnitude against parameterized quantum circuits. |
| Researcher Affiliation | Academia | (1) Joint Center for Quantum Information and Computer Science, University of Maryland; (2) Department of Computer Science, University of Maryland; (3) Department of Mathematics, University of Maryland; (4) Center for Machine Learning, University of Maryland |
| Pseudocode | Yes | Algorithm 1 Estimating gradients on an AQAM |
| Open Source Code | Yes | Our code is available here: https://github.com/YilingQiao/diffquantum |
| Open Datasets | No | The paper describes benchmark problems such as the 'H2 molecule' and the 'Max Cut problem', which are solved numerically, but these are not referred to as publicly available datasets with specific access information (e.g., links, DOIs, or formal citations to data repositories). |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, or testing. |
| Hardware Specification | No | The paper discusses 'IBM transmon system' and 'IBM's machines' in the context of the quantum computing model (AQAM design), but it does not specify the classical hardware (e.g., CPU, GPU models, memory) used to run the simulations or experiments described. |
| Software Dependencies | No | The paper mentions 'IBM Qiskit' in Section 5.2, but it does not provide a specific version number. No other software dependencies are listed with version numbers. |
| Experiment Setup | Yes | We match it in our experiments, setting T = 720dt. Additionally, we test our approach with only half the duration, T = 360dt. ... With the estimation of gradients, various optimizers designed for classical stochastic gradient descent are suitable to optimize the objective function. For example, Adam [35] is used in our experiments. ... The integration mini-batch updates parameters according to the estimation of derivatives on the sampled time. The observation mini-batch repeats experiments to generate more precise measurement results. The scheme is displayed in Algorithm 1. ... detailed hyper-parameter settings in Appendix D.1.1. |
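
The Experiment Setup row describes a hybrid loop: stochastic gradient estimates obtained from repeated measurements on the analog quantum machine are handed to a classical optimizer such as Adam. The sketch below is only a loose illustration of that loop under simplifying assumptions, not the paper's Algorithm 1: the single-qubit objective, the piecewise-constant pulse, the shot count, and the central finite-difference estimator are all stand-ins, whereas the paper derives a Monte-Carlo gradient rule evaluated on the AQAM itself, with an integration mini-batch over sampled times and an observation mini-batch over repeated measurements.

```python
# Minimal sketch (NOT the paper's Algorithm 1): noisy gradient estimates from a
# toy "device" are written into .grad by hand and consumed by torch.optim.Adam.
import numpy as np
import torch
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X drive
N_SEG, DT, SHOTS, EPS = 8, 0.5, 256, 0.05       # piecewise-constant pulse, shots, FD step

def expect_z(theta):
    """Evolve |0> under H(t) = theta_k * X on segment k; return a shot-averaged <Z>."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for amp in theta:
        psi = expm(-1j * amp * DT * X) @ psi
    p0 = abs(psi[0]) ** 2
    counts = np.random.binomial(SHOTS, p0)       # loosely analogous to the observation mini-batch
    return 2.0 * counts / SHOTS - 1.0            # empirical <Z> in [-1, 1]

params = torch.full((N_SEG,), 0.1, requires_grad=True)   # pulse amplitudes to optimize
opt = torch.optim.Adam([params], lr=0.1)                  # classical optimizer, as in the paper
rng = np.random.default_rng(0)

for step in range(200):
    theta = params.detach().numpy().copy()
    grad = np.zeros(N_SEG)
    # Loosely analogous to the integration mini-batch: only a random subset of
    # pulse segments gets a fresh derivative estimate at each step.
    batch = rng.choice(N_SEG, size=4, replace=False)
    for k in batch:
        tp, tm = theta.copy(), theta.copy()
        tp[k] += EPS
        tm[k] -= EPS
        grad[k] = (expect_z(tp) - expect_z(tm)) / (2 * EPS)  # central finite difference
    opt.zero_grad()
    params.grad = torch.tensor(grad, dtype=params.dtype)     # hand the estimate to Adam
    opt.step()

print("final <Z> estimate:", expect_z(params.detach().numpy()))
```

The design point mirrored here is that the device (here, a classical simulation with binomial shot noise) only ever returns noisy expectation values, so the gradient estimate is assigned to `params.grad` manually and any stochastic-gradient optimizer, Adam in the paper's experiments, can consume it.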