Generating Universal Adversarial Perturbations for Quantum Classifiers
Authors: Gautham Anil, Vishnu Vinod, Apurva Narayan
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We evaluate the performance of the proposed framework and show that our method achieves state-of-the-art misclassification rates, while maintaining high fidelity between legitimate and adversarial samples." and "We propose a strategy for generating additive UAPs using classical generative models and conduct experiments to validate the viability of the proposed approach." (see the additive-UAP sketch after the table) |
| Researcher Affiliation | Academia | ¹Indian Institute of Technology Madras, ²University of British Columbia, ³University of Western Ontario, ⁴University of Waterloo; {gauthamga.gga,vishnuvinod2001}@gmail.com, apurva.narayan@uwo.ca |
| Pseudocode | Yes | More details regarding the TIM dataset as well as the complete pseudocode for classical optimization are given in the supplementary. |
| Open Source Code | Yes | All source code used for this research may be found at: https://github.com/Idslgroup/QuGAP along with links to the supplementary material. |
| Open Datasets | Yes | "We test the generative framework by attacking quantum classifiers of different depths trained on two tasks: binary classification and four-class classification, and two datasets: MNIST (LeCun, Cortes, and Burges 2010) and FMNIST (Xiao, Rasul, and Vollgraf 2017)." and "The synthetic TIM dataset maps the states of the transverse-field Ising model described in (Pfeuty 1970) to the phase of the system (ferromagnetic or paramagnetic). We model this physical system as a binary classification task on pure quantum data. More details regarding the TIM dataset as well as the complete pseudocode for classical optimization are given in the supplementary." |
| Dataset Splits | No | The paper refers to "training procedure and hyperparameters used" in the supplementary material but does not explicitly state specific training/validation/test dataset splits (e.g., percentages or sample counts) in the main text. |
| Hardware Specification | No | The paper mentions 'computational limitations' and acknowledges 'access to computational resources' provided by the Digital Research Alliance of Canada, but it does not specify concrete hardware details such as the exact GPU or CPU models used for experiments. |
| Software Dependencies | No | The paper mentions that a PQC-based quantum generative model was 'implemented using the Pennylane library' but does not specify the version number for Pennylane or any other software dependencies (see the PennyLane sketch after the table). |
| Experiment Setup | No | The paper states that 'Additional details such as the structure of G, hyperparameters for training and software packages used as well as experiments for targeted attacks are detailed in the supplementary', but these specific experimental setup details (e.g., learning rate, batch size, number of epochs) are not provided in the main text. |
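For context on the "additive UAPs using classical generative models" quoted under Research Type, the sketch below shows the general additive-UAP idea: a single perturbation `delta` is trained against a fixed classifier so that `x + delta` is misclassified for most inputs. The function name, optimizer, epsilon bound, and loss are illustrative assumptions, not the paper's QuGAP configuration.

```python
# Hypothetical additive-UAP sketch: one shared perturbation `delta` is optimized
# so that `x + delta` is misclassified by a fixed classifier for most inputs.
# All hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn

def train_additive_uap(classifier, loader, eps=0.1, steps=5, lr=1e-2, device="cpu"):
    classifier.eval()                                    # attack a fixed, trained model
    x0, _ = next(iter(loader))
    delta = torch.zeros_like(x0[:1], device=device, requires_grad=True)  # one perturbation for all inputs
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = torch.clamp(x + delta, 0.0, 1.0)     # additive UAP, kept in valid pixel range
            loss = -nn.functional.cross_entropy(classifier(x_adv), y)  # maximise misclassification
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():                        # project back into the epsilon-ball
                delta.clamp_(-eps, eps)
    return delta.detach()
```

In the paper's framing the perturbation comes from a trained classical generative model rather than a directly optimized tensor; the projection-and-clip structure above is only meant to convey what "additive" and "universal" mean here.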
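The Software Dependencies row notes that the PQC-based quantum generative model was implemented with PennyLane but gives no version. The following is a minimal sketch of what such a PQC generator could look like in current PennyLane releases; the ansatz (StronglyEntanglingLayers), qubit count, and depth are assumptions, not the authors' circuit.

```python
# Hypothetical PennyLane sketch of a PQC-based generator: trainable layers act on
# an embedded input state and the circuit returns the resulting quantum state.
# Ansatz, width, and depth are illustrative assumptions.
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def generator(params, input_state):
    # load the (normalised) input amplitudes, then apply the trainable PQC
    qml.AmplitudeEmbedding(input_state, wires=range(n_qubits), normalize=True)
    qml.StronglyEntanglingLayers(params, wires=range(n_qubits))
    return qml.state()  # output state of the generator circuit

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
params = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)
psi = generator(params, np.ones(2 ** n_qubits))  # example call on a uniform input state
```

Pinning the PennyLane version (e.g., in a requirements file) alongside such a circuit is exactly the dependency detail the row above flags as missing.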