Quantum Algorithms for Deep Convolutional Neural Networks

Authors: Iordanis Kerenidis, Jonas Landman, Anupam Prakash

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We also present numerical simulations for the classification of the MNIST dataset to provide practical evidence for the efficiency of the QCNN."
Researcher Affiliation | Academia | "Iordanis Kerenidis, Jonas Landman & Anupam Prakash. Institut de Recherche en Informatique Fondamentale (IRIF), Université de Paris, CNRS, Paris, France. landman@irif.fr"
Pseudocode | Yes | "Algorithm 1 QCNN Layer"
Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code implementing the described methodology.
Open Datasets | Yes | "Numerical simulations for the classification of the MNIST dataset"
Dataset Splits | No | "This dataset is made of 60,000 training images and 10,000 testing images of handwritten digits."
Hardware Specification | No | "However, simulating this small QCNN on a classical computer was already very computationally intensive and time consuming."
Software Dependencies | No | "The experiment, using the PyTorch library developed by Paszke et al. (2017)…"
Experiment Setup | Yes | "The experiment, using the PyTorch library developed by Paszke et al. (2017), consists of training classically a small convolutional neural network to which we have added a quantum sampling step after each convolution. Instead of parametrising it with the precision η, we chose to use the sampling ratio σ, which represents the fraction of pixels drawn during tomography. These two definitions are equivalent, as shown in Appendix (Section D.1.5), but the second is more intuitive with respect to the running time and the simulations. We also add noise simulating the amplitude estimation (parameter ϵ), followed by a capped ReLU instead of the usual ReLU (parameter C), and noise during the backpropagation (parameter δ)." (A code sketch of this setup follows below.)
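
To make the quoted setup concrete, here is a minimal PyTorch sketch of how such a simulation could look. All names (`QuantumSamplingSim`, `SmallQCNN`, `sigma`, `eps`, `cap`) and the hyperparameter values are illustrative assumptions, not the authors' code: the uniform pixel-sampling mask stands in for the paper's tomography procedure, and the backpropagation noise δ is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantumSamplingSim(nn.Module):
    """Simulates the quantum effects applied after a classical convolution:
    - amplitude-estimation noise of magnitude eps (parameter epsilon),
    - tomography with sampling ratio sigma (fraction of pixels kept),
    - capped ReLU clamping activations to [0, cap] (parameter C).
    Uniform random pixel selection is a simplifying assumption here;
    the paper's tomography is more involved.
    """
    def __init__(self, sigma=0.5, eps=0.01, cap=10.0):
        super().__init__()
        self.sigma, self.eps, self.cap = sigma, eps, cap

    def forward(self, x):
        # Additive noise of magnitude up to +/- eps, mimicking the error
        # introduced by amplitude estimation.
        x = x + (2 * torch.rand_like(x) - 1) * self.eps
        # Keep a random fraction sigma of the output pixels; the rest
        # are zeroed out, as if never drawn during tomography.
        mask = (torch.rand_like(x) < self.sigma).float()
        x = x * mask
        # Capped ReLU: ReLU whose output is clamped at cap (C).
        return torch.clamp(x, min=0.0, max=self.cap)

class SmallQCNN(nn.Module):
    """Small CNN for MNIST with the simulated quantum sampling
    applied after each convolution, per the quoted setup."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(8, 16, kernel_size=3, padding=1)
        self.qsim = QuantumSamplingSim(sigma=0.5, eps=0.01, cap=10.0)
        self.fc = nn.Linear(16 * 7 * 7, 10)

    def forward(self, x):
        # 28x28 input -> 14x14 after the first pool, 7x7 after the second.
        x = F.max_pool2d(self.qsim(self.conv1(x)), 2)
        x = F.max_pool2d(self.qsim(self.conv2(x)), 2)
        return self.fc(x.flatten(1))
```

Under this reading, the network would be trained on MNIST's standard 60,000-image training set and evaluated on its 10,000-image test set, and sweeping σ, ϵ, and C would correspond to the kind of parameter study the quoted setup describes.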