QuACK: Accelerating Gradient-Based Quantum Optimization with Koopman Operator Learning

Authors: Di Luo, Jiayu Shen, Rumen Dangovski, Marin Soljačić

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate QuACK's remarkable ability to accelerate gradient-based optimization across a range of applications in quantum optimization and machine learning. In fact, our empirical studies, spanning quantum chemistry, quantum condensed matter, quantum machine learning, and noisy environments, have shown accelerations of more than 200x speedup in the overparameterized regime, 10x speedup in the smooth regime, and 3x speedup in the non-smooth regime.
Researcher Affiliation | Academia | Di Luo: Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Physics, Harvard University, Cambridge, MA 02138, USA; The NSF AI Institute for Artificial Intelligence and Fundamental Interactions (diluo@mit.edu). Jiayu Shen: Department of Physics, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA; Illinois Quantum Information Science and Technology Center; Illinois Center for Advanced Studies of the Universe (jiayus3@illinois.edu). Rumen Dangovski: Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA (rumenrd@mit.edu). Marin Soljačić: Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA (soljacic@mit.edu).
Pseudocode | Yes | Algorithm 1 QuACK. (An illustrative sketch of the alternating gradient/DMD loop follows this table.)
Open Source Code | Yes | Our code is available at https://github.com/qkoopman/QuACK.
Open Datasets | Yes | In addition to VQE, we consider the task of binary classification on a filtered MNIST dataset with samples labeled by digits 1 and 9. We use an interleaved block-encoding scheme for QML, which is shown to have generalization advantage [29, 13, 40, 63] and recently realized in experiment [62].
Dataset Splits | No | The paper mentions using 500 training examples and 500 test examples from a filtered MNIST dataset, but does not specify a validation split or percentages for these splits in a way that allows reproduction of the data partitioning.
Hardware Specification | Yes | All of our experiments for classically simulating quantum computation are performed on a single CPU... In this paper we used ibmq_lima, which is one of the IBM Quantum Falcon Processors.
Software Dependencies | No | Our experiments are run with Qiskit [2], PyTorch [57], Yao [47] (in Julia [7]), and Pennylane [6]. ... We perform simulations of VQE using Qiskit [2], a Python framework for quantum computation, and Yao [47], a framework for quantum algorithms in Julia [7]. Our neural network code is based on Qiskit and PyTorch [57]. Our implementation of quantum machine learning is based on Julia Yao. We use the quantum chemistry module from Pennylane [6] to obtain the Hamiltonian of the molecule.
Experiment Setup | Yes | In Sec. 6.2 Quantum Natural Gradient, we use the circular-entanglement Real Amplitudes ansatz and reps=1 (2 layers, 2N parameters) with T_{b,t} = 800, the quantum natural gradient optimizer, and learning rate 0.001. For the QuACK hyperparameters, we choose n_sim = 4 and n_DMD = 100 with n_iter = 8. The random sampling of the initialization of θ is from a uniform distribution in [0, 1)^{n_params}. ... We use 500 training examples and 500 test examples. During training, we use the stochastic gradient descent optimizer with batch size 50 and learning rate 0.05. The full QML training has T_{b,t} = 400 iterations. We choose n_sim = 10, n_DMD = 20, and n_SW = 6 for SW-DMD and MLP-SW-DMD. (An illustrative Qiskit sketch of this ansatz configuration also follows this table.)
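
The Pseudocode and Experiment Setup rows together outline QuACK's core loop: run a few gradient steps on the quantum device, fit a dynamic mode decomposition (DMD) model to the resulting parameter trajectory, and roll that model forward to skip many optimization steps classically. Below is a minimal Python sketch of that idea, not the paper's Algorithm 1 itself; the gradient oracle loss_grad, the learning rate, the bare least-squares DMD fit, and the toy quadratic loss are all illustrative assumptions.

```python
# Hypothetical sketch of the QuACK loop: alternate a few "real" gradient steps
# with a classical DMD extrapolation of the parameter trajectory.
import numpy as np

def dmd_extrapolate(trajectory, n_steps):
    """Fit theta_{t+1} ~= A @ theta_t from snapshots and roll A forward n_steps times."""
    X = trajectory[:-1].T          # columns are parameter snapshots theta_0 .. theta_{m-1}
    Y = trajectory[1:].T           # shifted snapshots theta_1 .. theta_m
    A = Y @ np.linalg.pinv(X)      # least-squares DMD operator (illustrative choice)
    theta = trajectory[-1].copy()
    for _ in range(n_steps):
        theta = A @ theta
    return theta

def quack(loss_grad, theta0, lr=0.01, n_sim=4, n_dmd=100, n_iter=8):
    """Alternate n_sim gradient steps with an n_dmd-step DMD prediction, n_iter times."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        trajectory = [theta.copy()]
        for _ in range(n_sim):                    # expensive (quantum) gradient evaluations
            theta = theta - lr * loss_grad(theta)
            trajectory.append(theta.copy())
        theta = dmd_extrapolate(np.array(trajectory), n_dmd)   # cheap classical prediction
    return theta

# Toy usage on a quadratic loss 0.5 * theta^T Q theta (gradient = Q @ theta).
Q = np.diag([1.0, 0.5, 0.1])
theta_star = quack(lambda t: Q @ t, theta0=np.ones(3), lr=0.1)
```

Because the DMD prediction only reuses the stored trajectory, the n_dmd extrapolation steps cost no additional quantum circuit evaluations, which is the mechanism behind the speedups quoted in the Research Type row.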
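Likewise, the ansatz quoted in the Experiment Setup row can be read as the Qiskit snippet below: a circular-entanglement RealAmplitudes circuit with reps=1 (two rotation layers, 2N parameters) and θ initialized uniformly in [0, 1). The 4-qubit size is an assumed placeholder, since the quoted text does not fix the system size here.

```python
# Illustrative configuration of the ansatz quoted in the Experiment Setup row.
import numpy as np
from qiskit.circuit.library import RealAmplitudes

num_qubits = 4                                                  # assumed placeholder size
ansatz = RealAmplitudes(num_qubits, entanglement="circular", reps=1)

# reps=1 gives two rotation layers, i.e. 2N trainable parameters.
assert ansatz.num_parameters == 2 * num_qubits

# Random initialization of theta, uniform in [0, 1)^{n_params} as quoted.
theta0 = np.random.uniform(0.0, 1.0, size=ansatz.num_parameters)
```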