Weight-Covariance Alignment for Adversarially Robust Neural Networks
Authors: Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy Hospedales
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on a number of popular benchmarks, show that it can be applied to different architectures, and that it provides robustness to a variety of white-box and black-box attacks, while being simple and fast to train compared to existing alternatives. |
| Researcher Affiliation | Collaboration | ¹University of Edinburgh, ²Samsung AI Center, Cambridge. |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | Datasets: For comparison against the current state-of-the-art and for our ablation study we use four benchmarks: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011) and Fashion-MNIST (Xiao et al., 2017). ... Imagenette2, a subset of ImageNet... (https://github.com/fastai/imagenette) ... mini-ImageNet (Vinyals et al., 2016). A loading sketch for these benchmarks follows the table. |
| Dataset Splits | No | CIFAR-10 and CIFAR-100 contain 60K 32x32 color images, 50K for training and 10K for testing, evenly spread across 10 and 100 classes respectively. SVHN... 73K for training and 26K for testing. Fashion-MNIST... 60K for training and 10K for testing... The paper specifies training and testing set sizes but does not explicitly define a separate validation split, so its size cannot be reproduced. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers like Python 3.8, PyTorch 1.9) needed to replicate the experiment. |
| Experiment Setup | Yes | The two hyperparameters of note across all of our experiments are the learning rate and ℓ2 penalty (i.e., weight decay), the exact values of which are provided in the supplementary material. A hedged optimizer sketch follows the table. |
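Since the paper names its benchmarks but does not publish a data pipeline, the following is a minimal sketch of loading them with torchvision. The `ToTensor`-only transform and the 45K/5K validation carve-out are illustrative assumptions, not the authors' actual preprocessing.

```python
# Minimal sketch of loading the benchmark datasets named in the paper.
# Assumes torchvision; transforms and the validation carve-out below are
# illustrative assumptions, not the authors' pipeline.
from torchvision import datasets, transforms
from torch.utils.data import random_split

to_tensor = transforms.ToTensor()

# CIFAR-10 / CIFAR-100: 50K train / 10K test, 32x32 color images.
cifar10_train = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar10_test = datasets.CIFAR10("data", train=False, download=True, transform=to_tensor)

# SVHN: ~73K train / ~26K test.
svhn_train = datasets.SVHN("data", split="train", download=True, transform=to_tensor)
svhn_test = datasets.SVHN("data", split="test", download=True, transform=to_tensor)

# Fashion-MNIST: 60K train / 10K test.
fmnist_train = datasets.FashionMNIST("data", train=True, download=True, transform=to_tensor)
fmnist_test = datasets.FashionMNIST("data", train=False, download=True, transform=to_tensor)

# The paper reports no explicit validation split; a hypothetical 45K/5K
# carve-out from the CIFAR-10 training set could be obtained like this:
cifar10_fit, cifar10_val = random_split(cifar10_train, [45_000, 5_000])
```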
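Likewise, the Experiment Setup row highlights only two hyperparameters, the learning rate and the ℓ2 penalty (weight decay), with exact values deferred to the supplementary material. The sketch below shows how these two quantities typically enter a PyTorch training run; the numeric values and the ResNet-18 stand-in are placeholders, not figures from the paper.

```python
import torch
import torchvision

# Stand-in architecture; the paper applies its method to several
# architectures, and none is assumed here.
model = torchvision.models.resnet18(num_classes=10)

# The paper's two highlighted hyperparameters: learning rate and l2 penalty
# (weight decay). All values below are placeholders, NOT from the paper,
# whose exact settings are given in its supplementary material.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,             # placeholder learning rate
    momentum=0.9,       # common default, assumed
    weight_decay=5e-4,  # placeholder l2 penalty
)
```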