A probabilistic population code based on neural samples

Authors: Sabyasachi Shivkumar, Richard Lange, Ankani Chattoraj, Ralf Haefner

NeurIPS 2018

Reproducibility variables, with the assessed result and the LLM's response for each:
Research Type: Experimental
LLM Response: The top row of Figure 2 shows a numerical approximation to the posterior over s for the finite-sample case and illustrates its convergence as the number of samples t grows, for the example model described in the previous section. As expected, posteriors for small numbers of samples are both wide and variable, and they become sharper and less variable as the number of samples increases (three runs are shown for each condition). Since the mean of the samples depends only on the marginals over x, it can be approximated using the mean-field solution for the image model. The bottom row of Figure 2 shows the corresponding population responses: the spike count of each neuron on the y-axis, sorted by each neuron's preferred stimulus on the x-axis.
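The trend the response describes (posteriors estimated from few samples are wide and variable across runs, then sharpen and stabilize as the sample count grows) can be illustrated numerically. The sketch below is a hypothetical toy Gaussian model, not the paper's image model; `s_true`, `sigma`, and `posterior_width` are illustrative names, and the spread of the sample-mean estimate stands in for posterior width.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: the true stimulus s sets the mean of noisy neural samples x,
# and s is estimated from t samples. Assumed values, not from the paper.
s_true = 1.0
sigma = 2.0  # assumed per-sample noise

def posterior_width(t, runs=200):
    """Spread of the t-sample estimate of s across independent runs.

    Each run draws t samples and estimates s by their mean; the standard
    deviation of those estimates across runs measures how variable the
    finite-sample posterior is.
    """
    estimates = [rng.normal(s_true, sigma, size=t).mean() for _ in range(runs)]
    return float(np.std(estimates))

for t in (5, 50, 500):
    print(f"t = {t:4d}  estimate spread = {posterior_width(t):.3f}")
```

Because the estimate averages t independent samples, its spread shrinks roughly as sigma / sqrt(t), matching the qualitative picture in the top row of Figure 2: wide and variable posteriors for small t, sharper and more stable ones for large t.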
Researcher Affiliation: Academia
LLM Response: Sabyasachi Shivkumar, Richard D. Lange, Ankani Chattoraj, Ralf M. Haefner; Brain and Cognitive Sciences, University of Rochester; {sshivkum, rlange, achattor, rhaefne2}@ur.rochester.edu
Pseudocode: No
LLM Response: The paper contains mathematical derivations and descriptions, but no explicitly labeled pseudocode or algorithm blocks.

Open Source Code: No
LLM Response: The paper does not contain any statements about releasing open-source code or links to a code repository.

Open Datasets: No
LLM Response: The paper mentions models being 'trained on natural images [14, 6]' in a general sense, referring to past work. However, it does not specify a particular public dataset used for the 'numerical approximation' and 'simulations' presented in this paper, nor does it provide concrete access information (link, DOI, specific citation) for a dataset used in their own work.

Dataset Splits: No
LLM Response: The paper does not provide information on training, validation, or test splits for any dataset used in its simulations or numerical approximations.

Hardware Specification: No
LLM Response: The paper does not specify any hardware (e.g., GPU or CPU models, or computational infrastructure) used for its simulations or numerical approximations.

Software Dependencies: No
LLM Response: The paper does not provide specific software dependencies or version numbers needed to replicate the work.

Experiment Setup: No
LLM Response: The paper describes mathematical models and derivations, along with numerical approximations and simulations. However, it does not provide specific experimental setup details such as hyperparameter values, optimization settings, or other configuration parameters commonly found in empirical machine learning papers.