SLIQ: Quantum Image Similarity Networks on Noisy Quantum Computers

Authors: Daniel Silver, Tirthak Patel, Aditya Ranjan, Harshitta Gandhi, William Cutler, Devesh Tiwari

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our simulations and real-computer evaluations demonstrate that SLIQ achieves a 31-percentage-point improvement in similarity detection over a baseline quantum triplet network on a real-world, unlabeled dataset (Chen, Lai, and Liu 2018), while prior state-of-the-art works in QML only perform classification and require labeled input data (Huang et al. 2021b; Silver, Patel, and Tiwari 2022). (A hedged triplet-loss sketch follows this table.)
Researcher Affiliation | Academia | Northeastern University {silver.da, patel.ti, ranjan.ad, gandhi.ha, cutler.wi, d.tiwari}@northeastern.edu
Pseudocode | No | The paper includes "Fig. 5: Overview of the design of SLIQ", which is a diagram, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | SLIQ is available as an open-source framework at https://github.com/SilverEngineered/SliQ.
Open Datasets | Yes | SLIQ is evaluated on the NIH AIDS Antiviral Screen Data (Kramer, De Raedt, and Helma 2001), MNIST (Deng 2012), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), and Flickr Landscape (Chen, Lai, and Liu 2018).
Dataset Splits | Yes | In all datasets, 80% of the data is reserved for training and 20% for testing.
Hardware Specification | Yes | SLIQ achieves 68.8% accuracy on the AIDS dataset, running on IBM Oslo.
Software Dependencies | No | The environment for SLIQ is Python3 with the Pennylane (Bergholm et al. 2018) and Qiskit (Aleksandrowicz et al. 2019) frameworks. While Python3 is mentioned, specific version numbers for Pennylane and Qiskit are not provided.
Experiment Setup | Yes | We use a batch size of 30 for all experiments with a Gradient Descent Optimizer and a learning rate of 0.01. We train for 500 epochs on a four-layer network. (A hedged training-loop sketch follows this table.)
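For context on the similarity-detection comparison in the Research Type row: SLIQ is evaluated against a baseline quantum triplet network. The snippet below shows only the standard classical triplet-margin loss, as a reference for what such a network optimizes; SLIQ's own quantum circuit and loss are described in the paper and are not reproduced here. The margin value and Euclidean distance metric are illustrative assumptions.

import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Standard (classical) triplet-margin loss: pull the anchor's embedding
    toward the positive example and push it away from the negative one.
    This is a generic reference, not SLIQ's loss."""
    d_pos = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(d_pos - d_neg + margin, 0.0)

# Toy usage with 2-D embeddings.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])    # similar to the anchor
n = np.array([1.0, 1.0])    # dissimilar to the anchor
print(triplet_margin_loss(a, p, n))   # prints 0.0: the negative is already far enough away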
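The reported settings (80%/20% split, batch size 30, gradient-descent optimizer, learning rate 0.01, 500 epochs, four circuit layers) map naturally onto a PennyLane training loop. The sketch below is a minimal illustration under those settings only; the ansatz (AngleEmbedding plus StronglyEntanglingLayers), the placeholder data, and the cost function are assumptions for illustration and are not SLIQ's actual circuit or loss.

# Minimal PennyLane training-loop sketch using the hyperparameters reported above.
# The circuit, data, and cost function are placeholders, not SLIQ's design.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
n_layers = 4          # "four-layer network" reported in the paper

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(params, features):
    # Hypothetical encoding + variational ansatz (assumption, not SLIQ's circuit).
    qml.AngleEmbedding(features, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(params, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def cost(params, batch):
    # Placeholder cost: mean circuit output over a batch; SLIQ's similarity
    # loss is defined in the paper and is not reproduced here.
    return sum(circuit(params, x) for x in batch) / len(batch)

# Placeholder features standing in for an encoded dataset.
data = np.random.uniform(0, np.pi, size=(150, n_qubits), requires_grad=False)

# 80%/20% train/test split, as reported in the paper.
split = int(0.8 * len(data))
train, test = data[:split], data[split:]   # `test` held out for evaluation (not shown)

params = np.random.uniform(0, 2 * np.pi,
                           size=(n_layers, n_qubits, 3), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.01)   # learning rate 0.01

batch_size, epochs = 30, 500
for epoch in range(epochs):
    for i in range(0, len(train), batch_size):
        batch = train[i:i + batch_size]
        params = opt.step(lambda p: cost(p, batch), params)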