Passive attention in artificial neural networks predicts human visual selectivity

Authors: Thomas Langlois, Haicheng Zhao, Erin Grant, Ishita Dasgupta, Tom Griffiths, Nori Jacoby

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using data from 79 new experiments and 7,810 participants, we show that passive attention techniques reveal a significant overlap with human visual selectivity estimates derived from 6 distinct behavioral tasks including visual discrimination, spatial localization, recognizability, free-viewing, cued-object search, and saliency search fixations. (A minimal gradient-saliency sketch of such a passive attention map appears after this table.)
Researcher Affiliation | Collaboration | Thomas A. Langlois (1,a,b), H. Charles Zhao (1,a), Erin Grant (c), Ishita Dasgupta (d), Thomas L. Griffiths (2,a,e), and Nori Jacoby (2,b). 1: T.A.L. and H.C.Z. contributed equally to this work. 2: T.L.G. and N.J. contributed equally to this work. a: Department of Computer Science, Princeton University; b: Computational Auditory Perception Research Group, Max Planck Institute for Empirical Aesthetics; c: Department of Electrical Engineering and Computer Sciences, UC Berkeley; d: DeepMind, New York; e: Department of Psychology, Princeton University.
Pseudocode | No | The paper describes methods in detail but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about making the source code available or include links to a code repository.
Open Datasets | Yes | All models were pretrained on ImageNet 2012 [38], CIFAR-100 [39], or Places365-Standard [40].
Dataset Splits | Yes | In a separate analysis, we repeated the smoothing parameter fitting using split-half cross-validation, and found that performance of smoothed hold-out test set maps using smoothing parameters fit to random training set maps produced nearly identical ranges in peak correlations to the human PC (between r = 0.73 and r = -0.01 for the training set, and between r = 0.72 and r = -0.04 for the testing set), as well as a nearly identical rank order in peak correlations to the human PC. (A sketch of this split-half smoothing fit appears after this table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions general tools and models (common ANN frameworks such as PyTorch are implied) but does not specify version numbers for any software dependencies.
Experiment Setup | No | The paper does not explicitly detail hyperparameter values, specific training configurations, or system-level settings used in their experiments.
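
The "passive attention" referenced in the Research Type row denotes attention maps read out from a pretrained network without task-specific training, for example gradient-based saliency. Since the paper does not release code, the sketch below is only a hypothetical illustration of one such technique (vanilla gradient saliency) using PyTorch and an ImageNet-pretrained torchvision ResNet; the model choice, preprocessing, and readout are assumptions, not the authors' implementation.

```python
# Minimal sketch of a "passive attention" map via vanilla gradient saliency.
# Assumptions: PyTorch + torchvision; the paper's own models and preprocessing may differ.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(pretrained=True).eval()  # ImageNet-pretrained backbone

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def gradient_saliency(img: Image.Image) -> torch.Tensor:
    """Return an HxW saliency map: |d(top-class logit) / d(input pixels)|."""
    x = preprocess(img).unsqueeze(0).requires_grad_(True)
    logits = model(x)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()
    # Collapse the channel dimension; larger gradients mark more influential pixels.
    return x.grad.abs().max(dim=1).values.squeeze(0)

# Example usage (hypothetical image path):
# saliency = gradient_saliency(Image.open("example.jpg").convert("RGB"))
```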
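
The Dataset Splits row describes fitting a spatial smoothing parameter with split-half cross-validation: the image set is split into random halves, a smoothing width is chosen on the training half by maximizing correlation with the human principal component (PC) map, and the held-out half is evaluated with that width. The sketch below is a hedged reconstruction of that procedure; the Gaussian smoothing, Pearson correlation criterion, array shapes, and sigma grid are assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of split-half smoothing-parameter fitting.
# Assumptions: maps are HxW NumPy arrays; smoothing is Gaussian; the fit criterion
# is Pearson correlation with a human "PC" map. The paper's exact setup may differ.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import pearsonr

def map_correlation(model_map: np.ndarray, human_pc: np.ndarray) -> float:
    """Pearson correlation between two flattened spatial maps."""
    return pearsonr(model_map.ravel(), human_pc.ravel())[0]

def split_half_smoothing_fit(model_maps, human_pcs, sigmas, seed=None):
    """Fit a Gaussian smoothing width on a random half of the image set,
    then report peak correlations on both the training and held-out halves."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(model_maps))
    train, test = idx[: len(idx) // 2], idx[len(idx) // 2 :]

    def mean_corr(indices, sigma):
        return np.mean([
            map_correlation(gaussian_filter(model_maps[i], sigma), human_pcs[i])
            for i in indices
        ])

    # Choose the smoothing width that maximizes correlation on the training half.
    best_sigma = max(sigmas, key=lambda s: mean_corr(train, s))
    return best_sigma, mean_corr(train, best_sigma), mean_corr(test, best_sigma)

# Example usage (hypothetical inputs):
# best_sigma, r_train, r_test = split_half_smoothing_fit(
#     model_maps, human_pcs, sigmas=np.linspace(1, 30, 10))
```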