Adversarial Reprogramming Revisited
Authors: Matthias Englert, Ranko Lazić
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical and Experimental | We initiate a theoretical study of adversarial reprogramming. In the experimental part of our work, we demonstrate that, as long as batch normalisation layers are suitably initialised, even untrained networks with random weights are susceptible to adversarial reprogramming. (Illustrated in the first sketch below the table.) |
| Researcher Affiliation | Academia | Matthias Englert, University of Warwick, m.englert@warwick.ac.uk; Ranko Lazić, University of Warwick, r.s.lazic@warwick.ac.uk |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We are making code to run the experiments available at https://github.com/englert-m/adversarial_reprogramming. |
| Open Datasets | Yes | We use the same dataset, which consists of 60,000 training images and 10,000 test images, for our experiments. It is available under the Creative Commons Attribution Share-Alike 3.0 licence. |
| Dataset Splits | No | The paper states the use of 60,000 training images and 10,000 test images, but does not explicitly mention a separate validation set split for hyperparameter tuning or early stopping during their experiments. |
| Hardware Specification | Yes | The experiments were mainly run on two internal clusters utilising a mix of NVIDIA GPUs such as GeForce RTX 3080 Ti, Quadro RTX 6000, GeForce RTX 3060, GeForce RTX 2080 Ti, and GeForce GTX 1080. |
| Software Dependencies | Yes | We use the networks exactly as implemented in Keras in TensorFlow 2.8.1. |
| Experiment Setup | Yes | We use the 60,000 training images to run an Adam optimiser [Kingma and Ba, 2015] with learning rate 0.01 and a batch size of 50 to optimise the unconstrained weights of the adversarial program. |
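
The Research Type row quotes the finding that even untrained networks with random weights are reprogrammable once their batch normalisation layers are suitably initialised. The paper's quoted text does not spell out what "suitably initialised" means, so the sketch below shows only one plausible reading: a randomly initialised ResNet50 whose batch-norm moving statistics are warmed up with a few forward passes in training mode. This is an illustrative assumption, not the authors' procedure.

```python
import tensorflow as tf

# Hedged sketch of the "untrained network" setting: random weights, no
# ImageNet training. The BN warm-up below is one plausible interpretation
# of "suitably initialised" batch normalisation, not the paper's recipe.
net = tf.keras.applications.ResNet50(weights=None)

warmup = tf.random.normal([32, 224, 224, 3])  # dummy inputs at ResNet50's default shape
for _ in range(10):
    net(warmup, training=True)  # training=True updates BN moving means/variances
```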
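The Experiment Setup row pins down the optimiser (Adam with learning rate 0.01), the batch size (50), the 60,000 training images, and the fact that only the unconstrained weights of the adversarial program are trained. The sketch below wires those quoted settings into a TensorFlow training loop. The MNIST dataset choice (its 60,000/10,000 split matches the Open Datasets row), the ResNet50 host network, the centre-padding embedding, the tanh parameterisation, and the first-10-classes label mapping are all illustrative assumptions rather than the paper's exact construction.

```python
import tensorflow as tf

# Quoted settings: Adam, learning rate 0.01, batch size 50, 60,000 training
# images, only the adversarial program's unconstrained weights are optimised.
# Everything else (MNIST, ResNet50 host, centre padding, tanh, label map)
# is assumed for illustration.

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train.astype("float32") / 255.0)[..., None]  # (60000, 28, 28, 1)

host = tf.keras.applications.ResNet50(weights="imagenet",
                                      classifier_activation=None)  # logits out
host.trainable = False  # the host network stays fixed

H = 224                                        # host input resolution
program = tf.Variable(tf.zeros([1, H, H, 3]))  # unconstrained program weights

def embed(batch):
    """Centre the small greyscale images in the host input and add the program."""
    rgb = tf.image.grayscale_to_rgb(batch)
    pad = (H - 28) // 2
    rgb = tf.pad(rgb, [[0, 0], [pad, pad], [pad, pad], [0, 0]])
    return rgb + tf.tanh(program)  # tanh keeps the perturbation bounded
    # (ImageNet preprocessing omitted for brevity)

opt = tf.keras.optimizers.Adam(learning_rate=0.01)  # as quoted in the paper

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = host(embed(x), training=False)[:, :10]  # digits -> 10 host classes
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(y, logits,
                                                            from_logits=True))
    opt.apply_gradients([(tape.gradient(loss, program), program)])
    return loss

ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
      .shuffle(60_000).batch(50))   # batch size 50, as quoted
for x, y in ds:                     # one epoch over the 60,000 training images
    train_step(x, y)
```

Note that the gradient update touches only `program`: freezing the host and differentiating solely with respect to the program weights is what "optimise the unconstrained weights of the adversarial program" amounts to in code.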