Toward Physically Realizable Quantum Neural Networks

Authors: Mohsen Heidari, Ananth Grama, Wojciech Szpankowski

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally assess the performance of our QNNs trained using randomized QSGD in Algorithm 1. In our experiments, we focus on binary classification of quantum states with {-1, 1} as the label set and with conventional 0-1 loss to measure the predictor's accuracy. Numerical experiments are simulated on a classical computer.
Researcher Affiliation | Academia | Mohsen Heidari, Ananth Grama, Wojciech Szpankowski; Department of Computer Science, Purdue University, West Lafayette, IN, USA
Pseudocode | Yes | Algorithm 1: Randomized QSGD
Input: Training data {(ρ_t, y_t)}_{t=1}^n, learning rate η_t
Output: Final parameters of the QNN: a
1. Initialize the parameter a with each component selected uniformly at random over [−1, 1].
2. for t = 1 to n do
3.   Randomly select a QP in the network; let it be the j-th QP of layer l.
4.   Pass ρ_t through the previous layers and let ρ_t^(l−1) be the output of layer (l − 1).
5.   Randomly select a component a_{l,j,s} of a_{l,j}.
6.   Apply V_s as in (5) on ρ_t^(l−1) ⊗ |+⟩⟨+| and measure the resulting state with N as in (6).
7.   With (ŷ, b) being the outcome, compute the measured derivative z_t = 2(−1)^b ℓ(y_t, ŷ).
8.   Update the s-th component of a_{l,j} as a_{l,j,s} ← a_{l,j,s} − η_t z_t.
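As a reading aid, here is a minimal classical-simulation sketch of the single-measurement update in Algorithm 1. It is not the authors' implementation: forward_to_layer and measure_derivative are hypothetical placeholders standing in for the circuit V_s of (5) and the measurement N of (6), and the parameter-tensor shape (layers, QPs per layer, components per QP) is an assumption.

import numpy as np

rng = np.random.default_rng(0)

def forward_to_layer(rho, a, layer):
    """Placeholder: propagate the state rho through layers 0..layer-1 of the QNN."""
    return rho  # assumption: identity stand-in for the simulated circuit

def measure_derivative(rho_prev, a, layer, j, s, y):
    """Placeholder for steps 6-7: apply V_s to rho_prev ⊗ |+><+|, measure with N,
    and turn the outcome (y_hat, b) into the measured derivative z_t."""
    y_hat = int(rng.choice([-1, 1]))     # assumption: simulated label outcome
    b = int(rng.integers(0, 2))          # assumption: simulated ancilla bit
    loss = 0.0 if y_hat == y else 1.0    # 0-1 loss, as used in the experiments
    return 2.0 * ((-1) ** b) * loss      # z_t = 2(−1)^b ℓ(y_t, ŷ)

def randomized_qsgd(samples, shape, alpha=0.77):
    """samples: list of (rho_t, y_t); shape = (layers, QPs per layer, components per QP)."""
    a = rng.uniform(-1.0, 1.0, size=shape)          # step 1: uniform init on [−1, 1]
    for t, (rho_t, y_t) in enumerate(samples, start=1):
        l = int(rng.integers(shape[0]))             # step 3: random layer
        j = int(rng.integers(shape[1]))             #         and QP within it
        rho_prev = forward_to_layer(rho_t, a, l)    # step 4: output of layer l−1
        s = int(rng.integers(shape[2]))             # step 5: random component of a[l, j]
        z_t = measure_derivative(rho_prev, a, l, j, s, y_t)  # steps 6-7
        eta_t = alpha / np.sqrt(t)                  # η_t = α/√t, as in the experiments
        a[l, j, s] -= eta_t * z_t                   # step 8: single-coordinate update
    return a

# Purely illustrative call with dummy 2x2 density matrices:
dummy_data = [(np.eye(2) / 2, int(rng.choice([-1, 1]))) for _ in range(200)]
final_params = randomized_qsgd(dummy_data, shape=(2, 3, 4))

Note that the loop makes a single pass over the samples: each training state is measured once and never replicated, which is why the algorithm has no epochs.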
Open Source Code | Yes | The source codes and implementations are available at https://github.com/mohsenhdkh/Randomized QSGD.
Open Datasets | Yes | We use a synthetic dataset from recent efforts (Mohseni, Steinberg, and Bergou 2004; Chen et al. 2020; Patterson et al. 2021; Li, Song, and Wang 2021) focused on quantum state discrimination.
Dataset Splits | No | The paper mentions training data and a test set but does not explicitly describe a validation set or specific split percentages.
Hardware Specification | No | The paper only states that 'Numerical experiments are simulated on a classical computer,' without providing specific hardware models or specifications.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | We train the QNN in Figure 3 using Algorithm 1 with η_t = α/√t and α ≈ 0.77. Our training process has no epochs, in contrast to conventional SGD, since sample replication is prohibited. Therefore, to show progress during the training phase, we group samples into multiple batches, each of 100 samples.
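Because there are no epochs, progress is tracked per batch of 100 samples rather than per pass over the data. The helper below is a minimal sketch of that reporting convention, assuming predictions and labels take values in {-1, 1} and each batch is scored with 0-1 accuracy; it is not taken from the paper's code.

def batched_accuracy(predictions, labels, batch_size=100):
    """Per-batch 0-1 accuracy over a single pass of the data (no epochs)."""
    accuracies = []
    for start in range(0, len(labels), batch_size):
        batch_pred = predictions[start:start + batch_size]
        batch_true = labels[start:start + batch_size]
        correct = sum(int(p == y) for p, y in zip(batch_pred, batch_true))
        accuracies.append(correct / len(batch_true))
    return accuracies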