Power-law efficient neural codes provide general link between perceptual bias and discriminability

Authors: Michael J. Morais, Jonathan W. Pillow

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In simulations, we explore a range of SNRs and power-law efficient codes to verify these results, and examine a variety of decoders including posterior mode, median, and mean estimators (Section 5), demonstrating the universality of the bias-discriminability relationship across a broad space of models. We used simulated data to test our derived nonlinear and linear relationships between bias and discriminability (eqs. 12 & 13)." (See the bias-and-discriminability sketch below the table.) |
| Researcher Affiliation | Academia | Michael J. Morais & Jonathan W. Pillow, Princeton Neuroscience Institute & Department of Psychology, Princeton University ({mjmorais, pillow}@princeton.edu) |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any specific links or explicit statements about releasing source code for the methodology described. |
| Open Datasets | No | Rather than using a pre-existing publicly available dataset, the paper generates "random smooth priors" for its simulations ("we draw random priors as exponentiated draws from Gaussian processes on $[-\pi, \pi]$, according to $\frac{1}{Z}\exp(f)$, where $f \sim \mathcal{GP}(0, K)$") and provides no access information for this generated data. (See the prior-sampling sketch below the table.) |
| Dataset Splits | No | The paper provides no train/validation/test split information; it analyzes "simulated data" drawn from "random priors" without specifying formal data partitions. |
| Hardware Specification | No | The paper reports no hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies (e.g., library or solver names with version numbers) needed to replicate the experiments. |
| Experiment Setup | Yes | "In all simulations, we propagate each stimulus $x \sim p(x)$ on a finely tiled grid through a Bayesian observer model numerically, computing a posterior $p(x \mid y) \propto p(x)\,\mathcal{N}\big(y;\, x,\, (k\,p(x)^q)^{-1}\big)$ for a power-law efficient code under many powers $q$ and SNRs $k$, and for each computed the Bayesian estimators associated with various loss functions of interest. [...] As such, we draw random priors as exponentiated draws from Gaussian processes on $[-\pi, \pi]$, according to $\frac{1}{Z}\exp(f)$, where $f \sim \mathcal{GP}(0, K)$ for $Z$ a normalizing constant, and $K$ the radial basis function kernel $K(x_i, x_j) = \rho^2 \exp\!\left(-\frac{1}{2\sigma^2}\lVert x_i - x_j\rVert^2\right)$ with magnitude $\rho = 1$ and lengthscale $\sigma = 0.75$, selected such that a typical prior was roughly bimodal." (A numerical sketch of this setup follows the table.) |
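
To make the quoted prior-generation recipe concrete, here is a minimal NumPy sketch: draw $f \sim \mathcal{GP}(0, K)$ on a grid over $[-\pi, \pi]$ using an RBF kernel with magnitude 1 and lengthscale 0.75, exponentiate, and normalize. The paper releases no code, so the grid size, jitter, and function names here are our own assumptions, not the authors' implementation.

```python
import numpy as np

def sample_random_prior(n_grid=512, lengthscale=0.75, magnitude=1.0, seed=0):
    """Draw a random smooth prior as an exponentiated GP sample on [-pi, pi]."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-np.pi, np.pi, n_grid)
    dx = x[1] - x[0]
    # RBF kernel: K(x_i, x_j) = magnitude^2 * exp(-||x_i - x_j||^2 / (2 * lengthscale^2))
    sqdist = (x[:, None] - x[None, :]) ** 2
    K = magnitude**2 * np.exp(-sqdist / (2.0 * lengthscale**2))
    # f ~ GP(0, K); a small jitter keeps the Cholesky factorization numerically stable
    L = np.linalg.cholesky(K + 1e-6 * np.eye(n_grid))
    f = L @ rng.standard_normal(n_grid)
    # Exponentiate and normalize so the prior integrates to 1 on the grid
    p = np.exp(f)
    p /= p.sum() * dx
    return x, p
```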
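Continuing the sketch, the Bayesian observer loop from the Experiment Setup row can be simulated as below. We assume the likelihood variance is the inverse of the power-law Fisher information, $1/(k\,p(x)^q)$, so that a larger SNR $k$ means less encoding noise; the authors' exact parameterization may differ, and `simulate_observer` and its defaults are hypothetical.

```python
def simulate_observer(x, prior, k=50.0, q=2.0, n_trials=5000, seed=1):
    """Propagate stimuli x ~ p(x) through a power-law-coded Bayesian observer."""
    rng = np.random.default_rng(seed)
    dx = x[1] - x[0]
    # Assumed power-law efficient code: noise variance = 1 / (k * p(x)^q),
    # so measurements are more precise where the prior puts more mass
    noise_var = 1.0 / (k * prior**q)
    idx = rng.choice(len(x), size=n_trials, p=prior / prior.sum())
    stims = x[idx]
    est = {name: np.empty(n_trials) for name in ("mode", "median", "mean")}
    for t, i in enumerate(idx):
        y = x[i] + rng.normal(0.0, np.sqrt(noise_var[i]))  # noisy measurement
        # Posterior on the grid: p(x|y) ∝ p(x) N(y; x, noise_var(x))
        like = np.exp(-0.5 * (y - x) ** 2 / noise_var) / np.sqrt(2 * np.pi * noise_var)
        post = prior * like
        post /= post.sum() * dx
        cdf = np.cumsum(post) * dx
        est["mode"][t] = x[np.argmax(post)]              # MAP estimate (0-1 loss)
        est["median"][t] = x[np.searchsorted(cdf, 0.5)]  # posterior median (L1 loss)
        est["mean"][t] = np.sum(x * post) * dx           # posterior mean (L2 loss)
    return stims, est
```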
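Finally, the bias-discriminability relationships the paper tests (eqs. 12 & 13) can be traced out from such simulations by binning: bias is the mean estimation error at each stimulus value, and discriminability can be proxied by the estimator's standard deviation in each bin. The s.d. proxy is a common approximation, not necessarily the paper's exact definition, and this helper is hypothetical.

```python
def bias_and_discriminability(stims, estimates, n_bins=40):
    """Binned bias b(x) = E[x_hat | x] - x, plus an s.d.-based discriminability proxy."""
    edges = np.linspace(stims.min(), stims.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    which = np.digitize(stims, edges[1:-1])  # bin index of each trial
    bias = np.full(n_bins, np.nan)
    thresh = np.full(n_bins, np.nan)
    for b in range(n_bins):
        e = estimates[which == b]
        if e.size > 1:
            bias[b] = e.mean() - centers[b]   # mean estimation error at this stimulus
            thresh[b] = e.std()               # threshold proxy, roughly ∝ 1/sqrt(J(x))
    return centers, bias, thresh
```

For example, `x, p = sample_random_prior()`, `stims, est = simulate_observer(x, p)`, and `centers, b, D = bias_and_discriminability(stims, est["mean"])` yield $b(x)$ and $D(x)$ for the posterior-mean decoder, which can then be compared against the predicted relationships.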