Adversarial Reprogramming of Neural Networks
Authors: Gamaleldin F. Elsayed, Ian Goodfellow, Jascha Sohl-Dickstein
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate adversarial reprogramming on six ImageNet classification models, repurposing these models to perform a counting task, as well as classification tasks: classification of MNIST and CIFAR-10 examples presented as inputs to the ImageNet model. In Section 4, we experimentally demonstrate adversarial programs that target several convolutional neural networks designed to classify ImageNet data. |
| Researcher Affiliation | Industry | Gamaleldin F. Elsayed, Google Brain, gamaleldin.elsayed@gmail.com; Ian Goodfellow, Google Brain, goodfellow@google.com; Jascha Sohl-Dickstein, Google Brain, jaschasd@google.com |
| Pseudocode | No | The paper describes the proposed method using mathematical equations and descriptive text, but it does not include any explicitly labeled pseudocode blocks or algorithms. (A hedged code sketch of the paper's formulation follows this table.) |
| Open Source Code | No | The paper mentions obtaining trained models from "TensorFlow-Slim" but does not provide any link or explicit statement about making the source code for their adversarial reprogramming methodology publicly available. |
| Open Datasets | Yes | We demonstrate adversarial reprogramming on six ImageNet classification models, repurposing these models to perform a counting task, as well as classification tasks: classification of MNIST and CIFAR-10 examples presented as inputs to the ImageNet model. The weights of all trained models were obtained from TensorFlow-Slim. |
| Dataset Splits | Yes | We measure test and train accuracy, so it is impossible for the adversarial program to have simply memorized all training examples. Table 1: Neural networks adversarially reprogrammed to perform a variety of tasks. Table gives accuracy of reprogrammed networks to perform a counting task, MNIST classification task, CIFAR-10 classification task, and Shuffled MNIST classification task. Table Supp. 3: Hyper-parameters for adversarial program training for MNIST classification adversarial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. |
| Hardware Specification | No | The paper mentions distributing training data across "a number of GPUs" in the supplementary material for hyperparameter tables, but it does not specify the exact models or types of GPUs, nor any other specific hardware components used for the experiments. |
| Software Dependencies | No | The paper mentions obtaining models from "TensorFlow-Slim" and using the "Adam optimizer" but does not provide specific version numbers for these or any other software components. |
| Experiment Setup | Yes | Hyperparameters are given in Appendix A. Table Supp. 2: Hyper-parameters for adversarial program training for the square counting adversarial task. Table Supp. 3: Hyper-parameters for adversarial program training for MNIST classification adversarial task. These tables specify values for λ, batch size, number of GPUs, learning rate, decay, and epochs/decay steps for different models and tasks. (See the training-configuration sketch below.) |
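
For readers who want the formulation the Pseudocode row refers to: the paper defines the adversarial program as P = tanh(W ⊙ M), where W are the learned program weights and M is a mask that is zero over the region where the small task image is embedded; the adversarial input is the task image, padded to ImageNet size, plus P. Below is a minimal PyTorch-style sketch of that construction. The function name, tensor shapes, and the choice of PyTorch (the paper's models came from TensorFlow-Slim) are illustrative assumptions, not the authors' code, which was not released.

```python
import torch

def make_adversarial_input(task_images, W, mask, big_size=224):
    """Embed small task images (e.g. 28x28 MNIST, replicated to 3 channels)
    at the centre of an ImageNet-sized canvas and add the adversarial
    program P = tanh(W * M), which is bounded in [-1, 1] like the inputs."""
    B, C, h, w = task_images.shape
    canvas = torch.zeros(B, C, big_size, big_size,
                         dtype=task_images.dtype, device=task_images.device)
    top, left = (big_size - h) // 2, (big_size - w) // 2
    canvas[:, :, top:top + h, left:left + w] = task_images
    program = torch.tanh(W * mask)  # mask zeroes the centre patch, so the
                                    # program never overwrites the task image
    return canvas + program         # X_adv = padded task image + P
```

The tanh keeps the program inside the valid pixel range of rescaled images, which is how the paper bounds the perturbation without a separate clipping step.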
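
The Experiment Setup row quotes Adam with default parameters, an exponentially decaying learning rate, and an L2 penalty weight λ. A training-loop sketch under those reported choices might look like the following; the torchvision ResNet-50 stand-in, the numeric values of λ, the decay rate, the epoch count, and the mapping of the first ten ImageNet labels to the ten task classes are assumptions for illustration (the per-model values are in the paper's appendix tables).

```python
import torch
import torch.nn.functional as F
from torchvision import datasets, models, transforms

# Stand-in victim network: the paper attacked six TF-Slim ImageNet models;
# a torchvision ResNet-50 is used here only to keep the example self-contained.
model = models.resnet50(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)        # only the program weights W are trained

# MNIST digits, replicated to 3 channels and scaled to [-1, 1].
tfm = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda t: t.repeat(3, 1, 1) * 2.0 - 1.0),
])
loader = torch.utils.data.DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=tfm),
    batch_size=100, shuffle=True)

W = torch.zeros(3, 224, 224, requires_grad=True)  # adversarial program weights
mask = torch.ones(3, 224, 224)
mask[:, 98:126, 98:126] = 0.0      # zero over the central 28x28 MNIST patch

optimizer = torch.optim.Adam([W])  # Adam with default parameters, per the paper
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.96)
lam = 1e-6                         # L2 penalty weight λ; value assumed here

for epoch in range(30):            # epoch count assumed; see appendix tables
    for x, y in loader:
        x_adv = make_adversarial_input(x, W, mask)
        logits = model(x_adv)
        # The first 10 ImageNet labels stand in for the 10 digit classes;
        # the loss is -log P(y_adv | X_adv) + λ ||W||^2_F, as in the paper.
        loss = F.cross_entropy(logits[:, :10], y) + lam * (W ** 2).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()               # exponential learning-rate decay
```

In practice the paper distributes training data across multiple GPUs and uses per-model values for λ, the learning rate, and the decay schedule (Tables Supp. 2-3).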