Encoding Robustness to Image Style via Adversarial Feature Perturbations

Authors: Manli Shu, Zuxuan Wu, Micah Goldblum, Tom Goldstein

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In Section 5.1, we evaluate our method on image classification. We measure the generalization of models on ImageNet variant datasets that feature different distributional shifts. In Table 1, we evaluate the performance of models on the four variant datasets."
Researcher Affiliation | Academia | 1 University of Maryland, College Park, USA; 2 Fudan University, Shanghai, China; 3 New York University, New York, USA
Pseudocode | Yes | "Algorithm 1: Training with Adversarial Batch Normalization"
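To make the quoted Algorithm 1 concrete, below is a minimal NumPy sketch of the core idea of Adversarial Batch Normalization: perturbing per-channel feature statistics (mean and standard deviation) within an ε-ball before re-denormalizing. This is an illustrative assumption-laden sketch, not the paper's implementation — in particular, `adv_bn_perturb` is a hypothetical helper, and random signs stand in for the gradient-sign ascent on the task loss that the real algorithm performs.

```python
import numpy as np

def adv_bn_perturb(features, tau=0.2, eps=0.5, repeats=3, rng=None):
    """Illustrative sketch of adversarial feature-statistic perturbation.

    `features` has shape (N, C, H, W). Per-channel mean/std are perturbed
    multiplicatively, and the perturbation is clipped to [1 - eps, 1 + eps].
    A real implementation would update `delta_*` by gradient ascent on the
    task loss; here random signs stand in for the gradient sign (assumption).
    """
    rng = rng or np.random.default_rng(0)
    n, c, h, w = features.shape
    mu = features.mean(axis=(0, 2, 3), keepdims=True)            # per-channel mean
    sigma = features.std(axis=(0, 2, 3), keepdims=True) + 1e-5   # per-channel std
    normalized = (features - mu) / sigma

    delta_mu = np.ones((1, c, 1, 1))     # multiplicative perturbation of the mean
    delta_sigma = np.ones((1, c, 1, 1))  # multiplicative perturbation of the std
    for _ in range(repeats):
        # Placeholder for sign of the loss gradient w.r.t. the perturbation.
        delta_mu += tau * rng.choice([-1.0, 1.0], size=delta_mu.shape)
        delta_sigma += tau * rng.choice([-1.0, 1.0], size=delta_sigma.shape)
        # Project back into the eps-ball around the identity (1.0).
        delta_mu = np.clip(delta_mu, 1 - eps, 1 + eps)
        delta_sigma = np.clip(delta_sigma, 1 - eps, 1 + eps)

    # Re-denormalize with the perturbed statistics.
    return normalized * (sigma * delta_sigma) + mu * delta_mu
```

The output has the same shape as the input, so in the paper's setup the downstream network layers would then be trained on these perturbed features. The values τ = 0.2, ε = 0.5, and 3 repeats mirror the experiment-setup row below.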
Open Source Code | Yes | "The source code and pre-trained models are released at https://github.com/azshue/AdvBN."
Open Datasets | Yes | "ImageNet-C [12] (under the Apache License 2.0) ... ImageNet-Instagram (ImageNet-Ins.) [45] ... ImageNet-Sketch [44] (under the MIT License) ... Stylized ImageNet (ImageNet-Style) [6] (under the MIT License)"
Dataset Splits | Yes | "We use the left-front view images of each sub-dataset, and split the dataset by randomly selecting 900 images for training and 500 for validation."
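The quoted 900/500 random split can be sketched in a few lines of stdlib Python. This is a generic illustration of the described procedure, not the authors' code; `split_subdataset` and the file names are hypothetical.

```python
import random

def split_subdataset(image_paths, n_train=900, n_val=500, seed=0):
    """Randomly split a sub-dataset's images into train/validation sets,
    mirroring the 900-train / 500-validation split quoted above."""
    if len(image_paths) < n_train + n_val:
        raise ValueError("not enough images for the requested split")
    shuffled = list(image_paths)
    random.Random(seed).shuffle(shuffled)  # seeded for a reproducible split
    return shuffled[:n_train], shuffled[n_train:n_train + n_val]
```

Fixing the seed makes the split reproducible across runs, which matters for the kind of reproducibility audit this table records.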
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or cloud instance specifications.
Software Dependencies | No | The paper mentions general software components such as ResNet-50 and SGD, and implies PyTorch through a linked repository, but it does not specify version numbers for any key software dependencies.
Experiment Setup | Yes | "The learning rate starts at 0.001 and decreases by a factor of 10 after 10 epochs. Our batch size is set to 256. We use SGD with a momentum of 0.9 and weight decay coefficient 10^-4. ... fine-tuning on a given sub-dataset for 30 epochs, with adversarial training parameters τ = 0.2, ϵ = 0.5, and 3 repeats."
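The optimizer settings in this row can be sketched as a step learning-rate schedule plus a standard SGD-with-momentum update. This is a generic sketch of the quoted hyperparameters, assuming the common L2-into-gradient formulation of weight decay; the paper does not spell out the exact variant, and `lr_at`/`sgd_step` are hypothetical helper names.

```python
def lr_at(epoch, base_lr=1e-3, decay_epoch=10, factor=0.1):
    """Step schedule from the quoted setup: start at 0.001,
    drop by a factor of 10 after 10 epochs."""
    return base_lr * (factor if epoch >= decay_epoch else 1.0)

def sgd_step(w, grad, velocity, lr, momentum=0.9, weight_decay=1e-4):
    """One SGD-with-momentum update, folding L2 weight decay into the
    gradient (a common convention; assumed here, not stated in the paper)."""
    g = grad + weight_decay * w
    velocity = momentum * velocity + g
    return w - lr * velocity, velocity
```

For example, minimizing the toy loss f(w) = w^2 with these settings steadily shrinks w, since the gradient 2w is scaled by the small learning rate while momentum accumulates the descent direction.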