Classically Approximating Variational Quantum Machine Learning with Random Fourier Features

Authors: Jonas Landman, Slimane Thabet, Constantin Dalyac, Hela Mhiri, Elham Kashefi

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this Section, we aim to assess the accuracy and efficiency of our classical methods to approximate VQCs in practice. Each VQC was analyzed using ideal simulators of quantum computers, on a classical computer, without taking the noise into account. Important complementary experiments are provided in Appendix G. In particular, we show scaling simulations in Appendix G.5."
Researcher Affiliation | Collaboration | Jonas Landman (University of Edinburgh; QC Ware), Slimane Thabet (LIP6, Sorbonne Université; PASQAL SAS), Constantin Dalyac (LIP6, Sorbonne Université; PASQAL SAS), Hela Mhiri (LIP6, Sorbonne Université; ENSTA Paris), Elham Kashefi (University of Edinburgh; LIP6, Sorbonne Université)
Pseudocode | Yes | Algorithm 1: RFF with Distinct Sampling; Algorithm 2: RFF with Tree Sampling; Algorithm 3: RFF with Grid Sampling. (A minimal RFF sketch follows the table.)
Open Source Code | Yes | "CODE AVAILABILITY: All the code that was used in this project is available following the anonymous link https://osf.io/by5dk/?view_only=5688cba7b13d44479f76e13e01d28d75"
Open Datasets | Yes | The paper uses the Fashion-MNIST dataset (Xiao et al., 2017) for a binary image classification task and the California Housing dataset for a regression task.
Dataset Splits | No | The paper reports Ntrain = 9600 and Ntest = 2400 for Fashion-MNIST and Ntrain = 5000 and Ntest = 1000 for California Housing, but does not describe a separate validation split; it mentions early stopping, which typically requires one. (A data-loading sketch with the reported split sizes follows the table.)
Hardware Specification | No | The paper states that experiments were run "on a classical computer" but does not give specific hardware details such as GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | The paper mentions software such as PyTorch and scikit-learn, and optimizers such as Adam, but does not specify version numbers, which are necessary for reproducible software dependencies. (A version-recording snippet follows the table.)
Experiment Setup | Yes | Final VQC predictions are obtained after 60 epochs (and, in a second configuration, after 100 epochs) using the Adam optimizer with learning rate 0.01; Tree-sampling RFF models are trained for 2000 epochs with early stopping, using Adam with learning rate 0.05. (A training-loop sketch follows the table.)
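
The three sampling algorithms named in the Pseudocode row are specific to the paper and not reproduced here; as a rough illustration of the shared RFF step they build on, here is a minimal sketch, assuming a hypothetical candidate frequency set standing in for the VQC spectrum (`rff_features`, `candidate_freqs`, and all sizes are illustrative, not taken from the paper):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def rff_features(X, omegas):
    """Map inputs to Fourier features [cos(x.w), sin(x.w)] for each sampled frequency w."""
    Z = X @ omegas.T  # shape (n_samples, n_frequencies)
    return np.concatenate([np.cos(Z), np.sin(Z)], axis=1)

# Hypothetical stand-in for the VQC frequency spectrum; the paper's Distinct,
# Tree, and Grid sampling strategies differ precisely in how this set is
# constructed and sampled.
d, n_candidates, n_sampled = 4, 1000, 50
candidate_freqs = rng.integers(-5, 6, size=(n_candidates, d)).astype(float)
omegas = candidate_freqs[rng.choice(n_candidates, size=n_sampled, replace=False)]

# Toy regression data; in the paper the target would be a VQC's output.
X = rng.uniform(-np.pi, np.pi, size=(200, d))
y = np.sin(X).sum(axis=1)

model = Ridge(alpha=1e-3).fit(rff_features(X, omegas), y)
print("train R^2:", model.score(rff_features(X, omegas), y))
```

A linear model over these sampled Fourier features is what makes the classical surrogate cheap to train compared with simulating the full VQC.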
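The exact preprocessing and split procedure are not restated in this report; the following is a minimal sketch of loading the California Housing data with the reported sizes, assuming scikit-learn's `fetch_california_housing` and a simple random split (the authors' actual procedure may differ):

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

data = fetch_california_housing()
X, y = data.data, data.target

# Reported sizes: Ntrain = 5000 and Ntest = 1000. Whether the authors shuffled,
# fixed a seed, or held out a validation set is not specified in the paper.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=1000, random_state=0)
X_train, y_train = X_rest[:5000], y_rest[:5000]
print(X_train.shape, X_test.shape)
```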
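Since the paper omits library versions, anyone rerunning the released code may want to record them explicitly; a small snippet for that purpose (assuming PyTorch and scikit-learn are installed):

```python
# Print the versions actually in use, since the paper does not pin them.
import sys
import sklearn
import torch

print("python:", sys.version.split()[0])
print("torch:", torch.__version__)
print("scikit-learn:", sklearn.__version__)
```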
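To make the reported optimizer settings concrete, here is a minimal PyTorch training-loop sketch using the stated Adam learning rate and epoch budget for RFF training; the model, data, and early-stopping criterion are hypothetical placeholders, not the paper's:

```python
import torch
from torch import nn

# Placeholder model; the actual RFF baseline is a linear model over Fourier
# features, and the VQC is simulated separately.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.05)  # reported RFF setting
loss_fn = nn.MSELoss()

X, y = torch.randn(256, 8), torch.randn(256, 1)

best, patience, bad = float("inf"), 50, 0
for epoch in range(2000):  # reported RFF budget: 2000 epochs
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    # Naive early stopping on training loss; the paper's criterion is unspecified.
    if loss.item() < best - 1e-6:
        best, bad = loss.item(), 0
    else:
        bad += 1
        if bad >= patience:
            break
```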