Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases

Authors: Shashi Kant Gupta, Mengmi Zhang, Chia-Chien Wu, Jeremy Wolfe, Gabriel Kreiman

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compared the model against human behavior in six paradigmatic search tasks that show asymmetry in humans. Without prior exposure to the stimuli or task-specific training, the model provides a plausible mechanism for search asymmetry. We tested this hypothesis by training the model on augmented versions of ImageNet where the biases of natural images were either removed or reversed. The polarity of search asymmetry disappeared or was altered depending on the training protocol. (A hedged augmentation sketch follows the table.)
Researcher Affiliation | Academia | (1) Indian Institute of Technology Kanpur, India; (2) Children's Hospital, Harvard Medical School; (3) Center for Brains, Minds and Machines; (4) Brigham and Women's Hospital, Harvard Medical School
Pseudocode | No | The paper describes the model architecture and mathematical formulations but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | All source code and data are publicly available at https://github.com/kreimanlab/VisualSearchAsymmetry.
Open Datasets | Yes | Importantly, the model was pre-trained for object classification on ImageNet and was not trained with the target or search images, or with human visual search data. [...] We trained eccNET on MNIST [13], which contains grayscale images of hand-written digits. (A backbone-loading sketch follows the table.)
Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix G.
Hardware Specification | Yes | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix G.
Software Dependencies | No | The paper mentions 'TensorFlow Keras [1]' but does not provide specific version numbers for these software components.
Experiment Setup | Yes | Based on the slope of eccentricity versus receptive field size in the macaque visual cortex [16], we experimentally set γ3 = 0.00, γ6 = 0.00, γ10 = 0.14, γ14 = 0.32, and γ18 = 0.64. [...] As in the stride size in the original pooling layers of VGG16, we empirically set a constant stride of 2 pixels for all eccentricity-dependent pooling layers. [...] we empirically selected l = 9, 13, 17 as the layers where top-down modulation is performed. (A pooling-size sketch follows the table.)
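
As referenced in the Research Type row, the paper probes the origin of search asymmetry by retraining on augmented ImageNet variants whose natural-image biases are removed or reversed. The snippet below is only a minimal Python sketch of that idea under an assumed luminance bias; the helper names (reverse_intensity_bias, remove_intensity_bias) and the probability-0.5 rule are illustrative, not the paper's actual protocol.

```python
import numpy as np

def reverse_intensity_bias(image):
    """Hypothetical augmentation: invert intensities so that a luminance bias
    present in natural images is reversed. `image` is a float array in [0, 1]."""
    return 1.0 - image

def remove_intensity_bias(image, rng):
    """Hypothetical augmentation: invert each image with probability 0.5 so the
    bias cancels out on average over the training set."""
    return 1.0 - image if rng.random() < 0.5 else image

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))  # stand-in for a preprocessed ImageNet image
reversed_img = reverse_intensity_bias(img)      # "bias reversed" condition
debiased_img = remove_intensity_bias(img, rng)  # "bias removed" condition
```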
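
As referenced in the Open Datasets row, the backbone was pre-trained for ImageNet classification and was not trained on the target or search images or on human data. A minimal TensorFlow Keras sketch of such a frozen, ImageNet-pretrained VGG16 backbone; freezing via `trainable = False` is our assumption about how the no-training constraint would be enforced.

```python
import tensorflow as tf

# ImageNet-pretrained VGG16 without the classification head; this mirrors the
# quoted claim that the model was pre-trained for object classification only.
backbone = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
backbone.trainable = False  # no fine-tuning on search images or human data
```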
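
As referenced in the Experiment Setup row, the γ values set how fast the pooling window grows with eccentricity at each modified VGG16 pooling stage, with a constant stride of 2. Below is a minimal NumPy sketch under the assumption of a linear window-size rule with a 2x2 floor; pool_window_size and eccentricity_map are hypothetical helpers, not the paper's implementation.

```python
import numpy as np

# Slopes (gamma) of pooling-window size vs. eccentricity for the five pooling
# stages, as quoted in the Experiment Setup row; indices follow the paper's
# layer numbering.
GAMMAS = {3: 0.00, 6: 0.00, 10: 0.14, 14: 0.32, 18: 0.64}
STRIDE = 2  # constant stride for all eccentricity-dependent pooling layers

def eccentricity_map(height, width, fixation):
    """Euclidean distance (in pixels) of every position from the fixation point."""
    ys, xs = np.mgrid[0:height, 0:width]
    fy, fx = fixation
    return np.sqrt((ys - fy) ** 2 + (xs - fx) ** 2)

def pool_window_size(eccentricity_px, gamma, min_size=2):
    """Assumed linear rule: the window grows with eccentricity at slope gamma,
    floored at a conventional 2x2 window (the floor is our assumption)."""
    return max(min_size, int(round(gamma * eccentricity_px)))

# Example: window sizes at fixation vs. in the periphery for layer 10.
ecc = eccentricity_map(224, 224, fixation=(112, 112))
print(pool_window_size(ecc[112, 112], GAMMAS[10]))  # at fixation -> 2 (floor)
print(pool_window_size(ecc[0, 0], GAMMAS[10]))      # far corner -> larger window
```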