A Framework for Testing Identifiability of Bayesian Models of Perception
Authors: Luigi Acerbi, Wei Ji Ma, Sethu Vijayakumar
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We examine the theoretical identifiability of the inferred internal representations in two case studies. First, we show which experimental designs work better to remove the underlying degeneracy in a time interval estimation task. Second, we find that the reconstructed representations in a speed perception task under a slow-speed prior are fairly robust. We apply our framework to two case studies: the inference of priors in a time interval estimation task (see [24]) and the reconstruction of prior and noise characteristics in speed perception [9]. Here, we only simulate the test sessions. |
| Researcher Affiliation | Academia | Luigi Acerbi1,2 Wei Ji Ma2 Sethu Vijayakumar1 1 School of Informatics, University of Edinburgh, UK 2 Center for Neural Science & Department of Psychology, New York University, USA |
| Pseudocode | No | The paper does not contain any structured pseudocode blocks or sections explicitly labeled as 'Algorithm' or 'Pseudocode'. |
| Open Source Code | No | The paper does not provide any explicit statement about open-source code availability, nor does it include links to a code repository. |
| Open Datasets | No | The paper describes simulating 'test sessions' and using 'experimental distribution of stimuli' defined within the model based on prior literature (e.g., 'a time interval estimation and reproduction task very similar to [24]'), but it does not provide access information (links, DOIs, or specific citations) for a publicly available dataset that they used for training or evaluation. |
| Dataset Splits | No | The paper does not explicitly provide details about training, validation, or test dataset splits (e.g., percentages or sample counts). It mentions simulating 'test sessions' but no formal data splits for model training and evaluation. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running the experiments or simulations. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers required to replicate the work. |
| Experiment Setup | Yes | To cast the problem within our framework, we need first to define the reference observer θ. We make the following assumptions: (a) the observer's priors (or prior, in only one condition) are smoothed versions of the experimental uniform distributions; (b) the sensory noise is affected by the scalar property of interval timing, so that the sensory mapping is logarithmic (s0 = 0, d = 1); (c) we take average sensorimotor noise parameters from [24]: σ = 0.10, γ = 0, κ = 0, and ρ0 = 0, ρ1 = 0.07; (d) for simplicity, the internal likelihood coincides with the measurement distribution; (e) the loss function in internal measurement space is almost-quadratic, with σℓ = 0.5, γℓ = 0, κℓ = 0; (f) we assume a small lapse probability λ = 0.03; (g) in case the observer performs in two conditions, all of the observer's parameters are shared across conditions (except for the priors). For the inferred observer θ we allow all model parameters to change freely, keeping only assumptions (d) and (g). We compare the following variations of the experimental setup: 1. BSL: The baseline version of the experiment; the observer performs in both the Short and Long conditions (Ntr = 500 each); 2. SRT or LNG: The observer performs more trials (Ntr = 1000), but only in either the Short (SRT) or the Long (LNG) condition; 3. MAP: As BSL, but we assume a difference in the performance feedback of the task such that the reference observer adopts a narrower loss function, closer to MAP (σℓ = 0.1); 4. MTR: As BSL, but the observer's motor noise parameters ρ0, ρ1 are assumed to be known (e.g. measured in a separate experiment), and therefore fixed during the inference. |
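The measurement stage described in the setup above can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation: it assumes the logarithmic sensory mapping (s0 = 0, d = 1) with noise σ = 0.10 and lapse probability λ = 0.03 from the cell above, while the stimulus range is a placeholder chosen for the example, not a value taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters quoted in the Experiment Setup row; the stimulus range
# below is a placeholder for illustration only.
SIGMA = 0.10              # sensory noise in internal (log) measurement space
LAPSE = 0.03              # lapse probability λ
STIM_RANGE = (0.5, 1.0)   # interval durations in seconds (placeholder)

def internal_measurement(s, sigma=SIGMA):
    """Noisy internal measurement under a logarithmic sensory mapping."""
    return np.log(s) + sigma * rng.standard_normal(np.shape(s))

def simulate_trial(s):
    """One reproduction trial: with probability LAPSE the response is a
    uniform draw from the stimulus range; otherwise the observer reports
    the stimulus implied by its noisy log-space measurement."""
    if rng.random() < LAPSE:
        return rng.uniform(*STIM_RANGE)
    return np.exp(internal_measurement(s))

# Simulate Ntr = 500 trials at a fixed 0.75 s interval (BSL trial count).
responses = np.array([simulate_trial(0.75) for _ in range(500)])
```

Because the noise acts in log space, response variability scales roughly with the interval magnitude, which is the scalar property of interval timing that assumption (b) encodes.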