Adaptive Contextual Perception: How To Generalize To New Backgrounds and Ambiguous Objects

Authors: Zhuofan Ying, Peter Hase, Mohit Bansal

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | First, we analyze model performance in these two different OOD settings and demonstrate that models that excel in one setting tend to struggle in the other. We conduct our analysis and experiments on two standard benchmarks, COLOROBJECT and SCENEOBJECT [38, 51]. We train a total of 340 models, 170 each for the COLOROBJECT and SCENEOBJECT datasets.
Researcher Affiliation | Academia | Zhuofan Ying (1,2), Peter Hase (1), Mohit Bansal (1); 1: UNC Chapel Hill, 2: Columbia University; {zfying, peter, mbansal}@cs.unc.edu
Pseudocode | No | The paper describes methods and processes in text and mathematical formulas but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at: https://github.com/zfying/AdaptiveContext
Open Datasets | Yes | We conduct our analysis and experiments on two standard benchmarks, COLOROBJECT and SCENEOBJECT [38, 51]. Our datasets are created using MSCOCO and Places [26, 54], both under the CC BY 4.0 license.
Dataset Splits | Yes | There are 16000 images for training, 2000 for validation, and 2000 for each of the test sets.
Hardware Specification | Yes | All experiments are conducted using Nvidia RTX 2080 Ti.
Software Dependencies | No | The paper mentions general components such as WideResNet (a model architecture) and notes that simple MLPs are trained on CPUs, but it does not specify software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA x.x).
Experiment Setup | Yes | To adapt to the 64 by 64 image size, we change the average pooling layer window size from 8 to 16. The learning rate is set to 0.1 and is decayed by a factor of 10 every 1500 update steps. The model is updated for 4000 steps in total. The batch sizes are 128 and 64 for COLOROBJECT and SCENEOBJECT, respectively.
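
The following is a minimal sketch of how the reported setup could be wired together in PyTorch. The learning rate, decay schedule, step count, batch sizes, and enlarged pooling window come from the Experiment Setup row above; the SGD optimizer, the toy convolutional model standing in for the paper's WideResNet, and the 10-way output head are assumptions made only to keep the example self-contained.

import torch
import torch.nn as nn
from torch.optim import SGD
from torch.optim.lr_scheduler import StepLR

# Values reported above; everything else in this sketch is assumed.
TOTAL_STEPS = 4000                  # total number of parameter updates
INITIAL_LR = 0.1                    # starting learning rate
DECAY_PERIOD = 1500                 # decay the learning rate every 1500 steps
DECAY_FACTOR = 0.1                  # i.e. divide the learning rate by 10
BATCH_SIZE = {"COLOROBJECT": 128, "SCENEOBJECT": 64}

# Toy stand-in for the WideResNet used in the paper; the average-pooling
# window is enlarged from 8 to 16 so that 64x64 inputs are pooled correctly.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(kernel_size=16),   # window size 8 -> 16 for 64x64 images
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 10),      # 64x64 input -> 4x4 after pooling; 10 classes is illustrative
)

optimizer = SGD(model.parameters(), lr=INITIAL_LR)
scheduler = StepLR(optimizer, step_size=DECAY_PERIOD, gamma=DECAY_FACTOR)
criterion = nn.CrossEntropyLoss()

for step in range(TOTAL_STEPS):
    # Placeholder batch; in practice this would come from the COLOROBJECT or
    # SCENEOBJECT loaders with the batch sizes listed above.
    images = torch.randn(BATCH_SIZE["COLOROBJECT"], 3, 64, 64)
    labels = torch.randint(0, 10, (BATCH_SIZE["COLOROBJECT"],))

    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()                # applies the 10x decay every 1500 steps

This is a sketch of the training schedule only; the actual repository (linked in the Open Source Code row) defines the real model, data loaders, and objectives.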