FOCUS: Familiar Objects in Common and Uncommon Settings

Authors: Priyatham Kattakinda, Soheil Feizi

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present a detailed analysis of the performance of various popular image classifiers on our dataset and demonstrate a clear drop in accuracy when classifying images in uncommon settings. We also show that finetuning a model on our dataset drastically improves its ability to focus on the object of interest leading to better generalization."
Researcher Affiliation | Academia | "University of Maryland, College Park, MD, USA."
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Our dataset and code for evaluating models on FOCUS are available at https://github.com/priyathamkat/focus."
Open Datasets | Yes | "Our dataset and code for evaluating models on FOCUS are available at https://github.com/priyathamkat/focus."
Dataset Splits | No | "We start by randomly splitting the dataset into train and test sets, which are 70% and 30% of the dataset in size, respectively." Only the split percentages are stated; no random seed or released split files are specified, so the exact partition cannot be reproduced. (A splitting sketch follows the table.)
Hardware Specification | No | The paper does not specify the hardware used for experiments, such as particular GPU or CPU models.
Software Dependencies | No | The paper mentions software such as PyTorch, EfficientNet-PyTorch, CLIP, and timm, but does not provide version numbers for any of them.
Experiment Setup | Yes | "We use SGD with a learning rate of 1e-4 to update the last layer (fully-connected layer) of each model for 10 epochs of the train split." (A finetuning sketch follows the table.)
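
The Dataset Splits row quotes a 70/30 random train/test split. Below is a minimal sketch of how such a split could be reproduced in PyTorch, assuming the FOCUS images can be loaded as a standard image-folder dataset; the folder path, transforms, and seed are illustrative assumptions, not details taken from the paper or its repository.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Hypothetical: load FOCUS images from a local folder; the actual repository
# (https://github.com/priyathamkat/focus) may expose its own dataset class.
focus = datasets.ImageFolder(
    "focus/images",
    transform=transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]),
)

# 70% train / 30% test, as described in the paper. The seed is an assumption;
# the paper does not state one, which is why the exact split is not reproducible.
n_train = int(0.7 * len(focus))
n_test = len(focus) - n_train
train_set, test_set = random_split(
    focus, [n_train, n_test], generator=torch.Generator().manual_seed(0)
)
```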
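The Experiment Setup row describes updating only the final fully-connected layer with SGD at a learning rate of 1e-4 for 10 epochs. The following is a minimal sketch of that linear-probe style finetuning for a torchvision ResNet, assuming a cross-entropy objective and a DataLoader over the train split from the previous sketch; the model choice, batch size, and class count are assumptions (the paper evaluates several classifiers).

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# One of the popular classifiers the paper evaluates; ResNet-50 chosen here for illustration.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False  # freeze the backbone

num_classes = 10  # hypothetical class count; use the FOCUS label set in practice
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head is trainable
model.to(device)

# Only the last (fully-connected) layer is updated, per the paper's setup.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)  # batch size assumed

model.train()
for epoch in range(10):  # 10 epochs, as stated in the paper
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```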