Design Principles of the Hippocampal Cognitive Map

Authors: Kimberly L Stachenfeld, Matthew Botvinick, Samuel J Gershman

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In our simulations we feed the agent the optimal policy; in the Supplementary Materials we discuss algorithms for policy improvement. In this section, we explore the neural implications of using the SR for policy evaluation: if the brain encoded the SR, what would the receptive fields of the encoding population look like, and what would the population look like at any point in time? This question is most easily addressed in spatial domains, where states index spatial locations (see Supplementary Materials for simulation details). … We simulated place fields in a version of the Tolman detour task [46], as described in [1]. Fig. 6 shows a simulation of this process and how it accounts for the facilitation effect."
Researcher Affiliation | Academia | (1) Princeton Neuroscience Institute and Department of Psychology, Princeton University; (2) Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
Pseudocode | No | The paper describes algorithms using mathematical equations and text, but it does not include any explicitly labeled pseudocode blocks or algorithm figures.
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is released, nor does it provide any links to a code repository. It mentions 'Supplementary Materials for simulation details' but gives no concrete access statement for the code.
Open Datasets | No | The paper does not mention using publicly available datasets with specific access information (links, DOIs, or formal citations with authors/year). It describes simulations in various synthetic environments and refers to 'Supplementary Materials for simulation details' without specifying dataset availability.
Dataset Splits | No | The paper discusses simulations and theoretical properties of the model but does not provide details of training, validation, or test splits. The simulations are exploratory and illustrative, not based on conventional dataset splitting for model training and evaluation.
Hardware Specification | No | The paper does not provide any details about the hardware (e.g., GPU or CPU models, memory, or cloud instances) used to run the simulations or experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or specific solvers); it only reports a learning rate for the simulations.
Experiment Setup | Yes | "Our focus here is on policy evaluation (computing V). In our simulations we feed the agent the optimal policy; in the Supplementary Materials we discuss algorithms for policy improvement. To simplify notation, we assume implicit dependence on π and define the state transition matrix T, where T(s, s′) = Σ_a π(a|s) P(s′|s, a). … where η is a learning rate (unless specified otherwise, η = 0.1 in our simulations)."
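The experiment setup quoted above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual simulation code (which is not released): the 3-state chain, discount factor γ, and reward vector are assumed values chosen for the example. It shows the policy-conditioned transition matrix T, the successor representation (SR) in closed form, SR-based policy evaluation V = M r, and one TD-style SR update with the quoted learning rate η = 0.1.

```python
import numpy as np

gamma = 0.95  # discount factor (illustrative choice, not from the paper)

# Policy-conditioned transition matrix T(s, s') = sum_a pi(a|s) P(s'|s, a);
# here a simple 3-state deterministic cycle serves as the environment.
T = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
])

# Closed-form SR: M = (I - gamma * T)^{-1}, the expected discounted
# future occupancy of each state under the policy.
M = np.linalg.inv(np.eye(3) - gamma * T)

# Policy evaluation with the SR: V = M r for a reward vector r
# (reward only in the last state, an assumed example).
r = np.array([0.0, 0.0, 1.0])
V = M @ r

# Incremental TD(0)-style SR update with learning rate eta = 0.1,
# applied to one illustrative observed transition s -> s_next.
eta = 0.1
M_hat = np.eye(3)            # initialize SR estimate to the identity
s, s_next = 0, 1
one_hot = np.eye(3)[s]
M_hat[s] += eta * (one_hot + gamma * M_hat[s_next] - M_hat[s])
```

Because reward sits at the end of the chain, states closer to it have higher value (V[2] > V[1] > V[0]), which is exactly what the SR factorization V = M r delivers without re-solving the Bellman equation when r changes.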