Texture Interpolation for Probing Visual Perception

Authors: Jonathan Vacher, Aida Davila, Adam Kohn, Ruben Coen-Cagli

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply our method by measuring the perceptual scale associated with the interpolation parameter in human observers, and the neural sensitivity of different areas of visual cortex in macaque monkeys. In psychophysics, we use Maximum Likelihood Difference Scaling (MLDS [29, 32]) to measure the perceptual scale of the interpolation weight (i.e. the position of the interpolated texture on the geodesic joining two textures). In neurophysiology, we study the tuning of visual cortical neurons in areas V1 and V4 to interpolation between a naturalistic texture and a spectrally matched Gaussian texture.
Researcher Affiliation | Academia | Jonathan Vacher, Albert Einstein College of Medicine, Dept. of Systems and Comp. Biology, 10461 Bronx, NY, USA (jonathan.vacher@ens.fr); Aida Davila, Albert Einstein College of Medicine, Dominick P. Purpura Dept. of Neuroscience, 10461 Bronx, NY, USA (adavila@mail.einstein.yu.edu); Adam Kohn and Ruben Coen-Cagli, Albert Einstein College of Medicine, Dept. of Systems and Comp. Biology and Dominick P. Purpura Dept. of Neuroscience, 10461 Bronx, NY, USA (adam.kohn@einsteinmed.org, ruben.coen-cagli@einsteinmed.org)
Pseudocode | No | The paper describes mathematical formulations and procedures but does not include any explicit pseudocode blocks or algorithms labeled as such.
Open Source Code | Yes | We provide code to perform texture synthesis and interpolation that can be run using a simple command line on a computer with a nvidia GPU or CPUs only. https://github.com/JonathanVacher/texture-interpolation
Open Datasets | Yes | We used 32 natural textures from the dataset of [7] and 32 natural images from BSD [1].
Dataset Splits | No | The paper mentions "cross-validated (5 folds) classification performance" for the neural decoding analysis but does not specify training, validation, or test splits for the texture synthesis/interpolation models or for the overall experimental reproduction.
Hardware Specification | No | The paper states that the code "can be run using a simple command line on a computer with a nvidia GPU or CPUs only," but it does not provide specific models or detailed specifications for the GPUs, CPUs, or any other hardware used for the experiments.
Software Dependencies | No | The paper mentions software like "scikit-learn [37]," "psychtoolbox [28]," and "jspsych [9]" but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | All stimuli had an average luminance of 128 (range [0, 255]) and an RMS contrast of 39.7. For each texture pair, we use 11 equally spaced (δt = 0.1) interpolation weights. Monitor gamma was corrected to 1 assuming the standard value of 2.2. Textures were interpolated between synthesized naturalistic textures (t = 1) and their spectrally matched Gaussian counterpart (t = 0) at 5 different weights (t = 0.0, 0.3, 0.5, 0.7, 1.0). All stimuli had their luminance normalized as in the MLDS experiment and were presented at 5 different sizes (2°, 4°, 6°, 8°, 10°) on a CRT monitor. A successful trial consisted of the subject maintaining fixation within a central 1.4° window for 1.3 seconds. During this time we presented a sequence of 3 textures, each displayed for 300 ms and immediately followed by a 100-ms blank screen.
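The rows above reference several standard analyses that can be illustrated with short sketches. The Research Type row describes Maximum Likelihood Difference Scaling (MLDS) for estimating the perceptual scale of the interpolation weight. Below is a minimal sketch of an MLDS fit for the method of triads, assuming a Gaussian-noise difference model with the endpoints of the scale pinned to 0 and 1; the simulated observer, the fixed noise level, and all variable names are illustrative assumptions, not the authors' implementation or analysis settings.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Interpolation weights at which textures were shown (11 equally spaced values).
t_values = np.linspace(0.0, 1.0, 11)
n = len(t_values)

def neg_log_likelihood(psi_free, triads, responses, sigma=0.1):
    """Negative log-likelihood of MLDS triad judgments.

    psi_free  : free perceptual-scale values for the interior stimuli;
                the endpoints are pinned to 0 and 1 and the noise sd is fixed.
    triads    : (n_trials, 3) array of stimulus indices i < j < k.
    responses : 1 if the observer judged pair (j, k) more different than (i, j).
    """
    psi = np.concatenate(([0.0], psi_free, [1.0]))
    i, j, k = triads.T
    delta = (psi[k] - psi[j]) - (psi[j] - psi[i])   # difference of perceptual differences
    p = np.clip(norm.cdf(delta / sigma), 1e-6, 1 - 1e-6)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Simulated observer (illustration only): a hypothetical expansive perceptual scale.
rng = np.random.default_rng(0)
true_psi = t_values ** 2
triads = np.sort(np.array([rng.choice(n, size=3, replace=False)
                           for _ in range(2000)]), axis=1)
i, j, k = triads.T
delta_true = (true_psi[k] - true_psi[j]) - (true_psi[j] - true_psi[i])
responses = (delta_true + rng.normal(0.0, 0.1, size=len(triads)) > 0).astype(float)

# Maximum-likelihood fit of the interior scale values.
fit = minimize(neg_log_likelihood, x0=np.linspace(0.0, 1.0, n)[1:-1],
               args=(triads, responses), method="L-BFGS-B")
perceptual_scale = np.concatenate(([0.0], fit.x, [1.0]))
print(np.round(perceptual_scale, 2))
```

In practice the noise standard deviation is usually fitted jointly with the scale values, and confidence intervals are obtained by bootstrapping.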
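The t = 0 endpoint of the interpolation is described as a spectrally matched Gaussian texture. One common way to build such a stimulus is to keep an image's Fourier amplitude spectrum and randomize its phases; the sketch below illustrates that general idea only and is not taken from the authors' repository.

```python
import numpy as np

def spectrally_matched_noise(img, seed=0):
    """Gaussian-like texture sharing the amplitude spectrum of `img` (phases randomized)."""
    rng = np.random.default_rng(seed)
    amplitude = np.abs(np.fft.fft2(img.astype(float)))
    # Taking the phase of the FFT of real white noise keeps the spectrum
    # conjugate-symmetric, so the inverse transform is (numerically) real.
    random_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * random_phase)))

# Example: phase-randomize a random array standing in for a synthesized texture.
texture = np.random.default_rng(3).uniform(0, 255, size=(128, 128))
gaussian_counterpart = spectrally_matched_noise(texture)
```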
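The Dataset Splits row quotes a 5-fold cross-validated classification of neural responses. A minimal scikit-learn sketch of that style of decoding analysis is shown below; the simulated spike counts, the logistic-regression decoder, and the labels are assumptions made for illustration rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical data: trial-by-neuron spike counts labeled by interpolation weight.
weights = np.array([0.0, 0.3, 0.5, 0.7, 1.0])   # the five weights used for physiology
n_trials_per_weight, n_neurons = 40, 60
X = np.vstack([rng.poisson(5 + 3 * w, size=(n_trials_per_weight, n_neurons))
               for w in weights])
y = np.repeat(np.arange(len(weights)), n_trials_per_weight)

# 5-fold cross-validated decoding of interpolation weight from population activity.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```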
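Finally, the Experiment Setup row specifies a mean luminance of 128 and an RMS contrast of 39.7 in pixel units. A sketch of one straightforward way to impose those statistics on a stimulus is given below; the clipping step and the exact definition of RMS contrast used here are assumptions, not the authors' stimulus code.

```python
import numpy as np

def normalize_stimulus(img, mean_lum=128.0, rms_contrast=39.7):
    """Rescale an image to a target mean luminance and RMS contrast (pixel units)."""
    img = img.astype(float)
    centered = img - img.mean()
    std = centered.std()
    if std > 0:
        centered = centered * (rms_contrast / std)
    out = centered + mean_lum
    # Keep values inside the displayable range; clipping slightly perturbs the
    # statistics, so in practice one may iterate or verify the final values.
    return np.clip(out, 0, 255)

# Example on a random array standing in for a texture stimulus.
stim = normalize_stimulus(np.random.default_rng(2).uniform(0, 255, size=(256, 256)))
print(stim.mean(), stim.std())
```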