Salient ImageNet: How to Discover Spurious Features in Deep Learning?
Authors: Sahil Singla, Soheil Feizi
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply our proposed methodology to the Imagenet dataset: we conducted a Mechanical Turk study using 232 classes of Imagenet... For various standard models (Resnet-50, Wide-Resnet-50-2, Efficientnet-b4, Efficientnet-b7), we evaluate their accuracy drops due to corruptions in spurious or core regions... |
| Researcher Affiliation | Academia | Sahil Singla & Soheil Feizi University of Maryland, College Park {ssingla,sfeizi}@umd.edu |
| Pseudocode | No | The paper does not contain explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Code and dataset for reproducing all experiments in the paper is available at https://github.com/singlasahil14/salient_imagenet. |
| Open Datasets | Yes | We apply our proposed methodology to the Imagenet dataset: we conducted a Mechanical Turk study using 232 classes of Imagenet... Using this methodology, we introduce the Salient Imagenet dataset containing core and spurious masks for a large set of samples from Imagenet... The dataset and anonymized Mechanical Turk study results are also available at the associated github repository. |
| Dataset Splits | No | The paper uses pre-trained models and describes selecting images from the Imagenet training and validation sets, but it does not define explicit train/validation/test splits for reproducing any model training within its framework. |
| Hardware Specification | No | The paper does not specify the hardware, such as GPU or CPU models, used to run the experiments. |
| Software Dependencies | No | The paper implies the use of Python libraries such as OpenCV and NumPy through code snippets, but it does not specify exact version numbers for any software dependencies. |
| Experiment Setup | Yes | We use σ = 0.25 (equation 1)... For optimization, we use gradient ascent with step size = 40, number of iterations = 25 and ρ = 500. |
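The reported setup (gradient ascent with step size = 40, 25 iterations, ρ = 500) can be sketched as follows. This is a minimal stand-in, not the authors' code: `feature_activation` is a toy linear feature replacing the paper's robust-model feature map, gradient normalization is a common feature-visualization convention assumed here, and the role of ρ = 500 is not reproduced since the excerpt does not define it.

```python
import numpy as np

def feature_activation(x, w):
    # Toy stand-in for the CNN feature activation the paper maximizes.
    return float(w @ x)

def gradient_ascent(x0, w, step_size=40.0, n_iters=25):
    """Plain gradient ascent using the paper's reported hyperparameters
    (step size = 40, 25 iterations). The objective here is a toy linear
    feature; the paper's exact objective and its rho = 500 term are
    omitted (assumptions flagged in the lead-in)."""
    x = x0.copy()
    for _ in range(n_iters):
        grad = w  # gradient of w @ x with respect to x
        # Normalizing the gradient so step_size sets the update magnitude
        # is an assumed convention, not stated in the excerpt.
        x = x + step_size * grad / (np.linalg.norm(grad) + 1e-12)
    return x

x0 = np.zeros(8)
w = np.ones(8)
x_opt = gradient_ascent(x0, w)
```

With a constant normalized gradient, each of the 25 steps moves `x` by 40 along `w/||w||`, so the final activation grows linearly with the number of iterations.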