CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets
Authors: Bingyin Zhao, Yingjie Lao (pp. 9162-9170)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our approach on the CIFAR-10 and ImageNet datasets over a variety of models. |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Clemson University, SC, 29634, USA {bingyiz, ylao}@clemson.edu |
| Pseudocode | Yes | Algorithm 1: Poisoned Data Generation and Algorithm 2: Negative Class Selection |
| Open Source Code | Yes | Codes are available at: https://github.com/bxz9200/CLPA. |
| Open Datasets | Yes | We demonstrate the effectiveness of our approach on CIFAR-10 and ImageNet datasets |
| Dataset Splits | No | The paper states: "we train the downstream classifiers using 5000 training images from the training dataset combined with poisoned data" and "The performance is evaluated on the test dataset of 10000 images." It does not explicitly mention a separate validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper mentions software components and models (e.g., "conditional GAN", "BigGAN", "SGD optimizer", "Inception-V3 model") but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | The neural networks are trained for 10 epochs, both for end-to-end training and for FC-layer-only training. ... The SGD optimizer is used for model training at a learning rate of 1×10⁻⁴ with a batch size of 16." (from section "Evaluation on CIFAR-10") and "Training iterations of phase I", "Training iterations of phase II", "Margin parameter α" in Table 2. |
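The training hyperparameters reported above (SGD, learning rate 1×10⁻⁴, batch size 16, 10 epochs) can be sketched with a minimal mini-batch SGD loop. This is a toy illustration only, not the authors' code: the linear softmax classifier and random stand-in data are assumptions, standing in for the downstream classifiers trained on CIFAR-10 plus poisoned data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 64 samples, 20 features, 10 classes.
# (The paper trains on CIFAR-10 images mixed with poisoned data.)
X = rng.normal(size=(64, 20))
y = rng.integers(0, 10, size=64)

W = np.zeros((20, 10))                  # linear softmax classifier (stand-in model)
lr, batch_size, epochs = 1e-4, 16, 10   # values reported in the paper


def softmax_ce_grad(X_b, y_b, W):
    """Gradient of the mean cross-entropy loss w.r.t. W for one mini-batch."""
    logits = X_b @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y_b)), y_b] -= 1.0           # softmax - one-hot
    return X_b.T @ p / len(y_b)


# Mini-batch SGD with the reported hyperparameters.
for _ in range(epochs):
    for i in range(0, len(X), batch_size):
        W -= lr * softmax_ce_grad(X[i:i + batch_size], y[i:i + batch_size], W)
```

With a learning rate this small the weights move only slightly over 10 epochs; in the paper the same schedule is applied to pretrained networks (end-to-end or FC layer only), where small steps suffice.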