Spectral Signatures in Backdoor Attacks
Authors: Brandon Tran, Jerry Li, Aleksander Madry
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the efficacy of these signatures in detecting and removing poisoned examples on real image sets and state of the art neural network architectures. |
| Researcher Affiliation | Collaboration | Brandon Tran EECS MIT Cambridge, MA 02139 btran@mit.edu Jerry Li Simons Institute Berkeley, CA 94709 jerryzli@berkeley.edu Aleksander Madry EECS MIT madry@mit.edu ... J.L. was supported by ... and an intern at Google Brain. ... A.M. was supported in part by ... a Google Research Award... |
| Pseudocode | Yes | More detailed pseudocode is provided in Algorithm 1. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing its source code or a link to a code repository. |
| Open Datasets | Yes | On CIFAR-10, which contains 5000 images for each of 10 labels... We study backdoor poisoning attacks on the CIFAR10 [19] dataset... |
| Dataset Splits | No | The paper mentions training and test sets (e.g., 'original test set', 'backdoored test set' for CIFAR-10) but does not specify validation splits or detailed percentages for data partitioning. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9'). |
| Experiment Setup | No | The paper describes the model architecture (ResNet with specific layer/filter details) and attack parameters (shape, position, color, epsilon), but does not provide specific training hyperparameters such as learning rate, batch size, or number of epochs. |
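
The Pseudocode row above points to the paper's Algorithm 1, which scores each training example by its squared correlation with the top singular vector of the centered feature representations for its label and removes the highest-scoring examples before retraining. Since the paper releases no source code (see the Open Source Code row), the following is a minimal NumPy sketch of that outlier-scoring step; the function names, the choice of penultimate-layer activations as the representation, and the per-label 1.5·ε removal fraction are assumptions drawn from the paper's description, not the authors' implementation.

```python
import numpy as np

def spectral_signature_scores(reps):
    """Outlier scores from the top singular vector of centered representations.

    reps: (n_examples, d) array of learned feature representations for one
    label (assumed here to be penultimate-layer activations). Returns one
    non-negative score per example; poisoned examples tend to score highest.
    """
    centered = reps - reps.mean(axis=0, keepdims=True)
    # Top right singular vector of the centered representation matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_v = vt[0]
    # Squared projection onto the top singular direction.
    return (centered @ top_v) ** 2

def remove_top_scores(indices, scores, eps, multiplier=1.5):
    """Drop the examples with the highest scores (a 1.5*eps fraction per
    label, following the removal rule described in the paper)."""
    n_remove = int(multiplier * eps * len(scores))
    order = np.argsort(scores)[::-1]
    flagged = set(order[:n_remove].tolist())
    return [idx for pos, idx in enumerate(indices) if pos not in flagged]
```

In the paper's setup this is applied separately to each label after training on the poisoned data, and the network is then retrained on the cleaned training set.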