Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps

Authors: Christopher Kymn, Sonia Mazelet, Anthony Thomas, Denis Kleyko, Edward Frady, Fritz Sommer, Bruno Olshausen

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We simulate trajectories based on a statistical model for generating plausible rodent movements in an arena [35, 36], and we update grid cell and place cell state vectors according to Equations 7 and 8, respectively. To evaluate the robustness of the model to error (Appendix B.3), we consider both extrinsic noise (e.g., mis-representations of velocity information) and intrinsic noise (e.g., due to noise in weight updates). The robustness of our model to intrinsic noise is tested by comparing our results to the estimated trajectories obtained without the correction by the MEC modules (Figure 6A and B). (A trajectory-simulation sketch follows the table.)
Researcher Affiliation | Collaboration | 1 Redwood Center for Theoretical Neuroscience, Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, USA; 2 Université Paris-Saclay, ENS Paris-Saclay, Gif-sur-Yvette, France; 3 Department of Electrical and Computer Engineering, UC Davis, Davis, USA; 4 Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden; 5 Intelligent Systems Lab, Research Institutes of Sweden, Kista, Sweden; 6 Intel Labs, Santa Clara, USA; 7 Herbert Wertheim School of Optometry & Vision Science, UC Berkeley, Berkeley, USA
Pseudocode | No | The paper contains mathematical equations and descriptions of processes, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/SoniaMaz8/Hippocampal_enthorinal_circuit
Open Datasets | Yes | We consider the Fashion MNIST dataset, from which we select 105 images of size 28 × 28 that we binarize by setting pixel values to −1 if below 127, and 1 elsewhere. (See the binarization sketch after the table.)
Dataset Splits | No | The paper simulates trajectories and generates random patterns for its experiments. It describes trial numbers and noise conditions for evaluation but does not specify formal training, validation, or test dataset splits in the conventional sense (e.g., 80/10/10 percentages or specific counts for pre-existing datasets).
Hardware Specification | No | All experiments were performed on CPU with local resources. We do not have precise estimates of the amount of compute operations required, but each individual simulation took less than 3 days of total compute time.
Software Dependencies | No | All experiments were implemented in Python involving standard packages for scientific computing (including NumPy, SciPy, Matplotlib).
Experiment Setup | Yes | For each number of moduli, K, we seek to find the smallest dimension D for which our attractor model factorizes its input, p, into the correct grid states in a fixed time (50 iterations) with high probability (at least 99 percent empirically). In instances where the network states remain similar over time (at least 0.95 cosine similarity), we consider that it converged to a fixed point. If such convergence did not occur, we evaluate the accuracy at the last time step. (See the convergence-check sketch after the table.)
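
The Research Type row quotes the paper's trajectory simulation. Below is a minimal, illustrative sketch of a smooth random walk in a square arena; the cited movement model [35, 36] and the paper's Equations 7 and 8 are not reproduced here, so the speed and turning parameters, the arena size, and the function name simulate_trajectory are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of a smooth random-walk trajectory in a square arena.
# The paper cites a statistical model of rodent movement [35, 36]; its exact
# parameters (speed distribution, turning noise, wall handling) are not given
# in the excerpt, so the values below are illustrative assumptions.
def simulate_trajectory(n_steps=1000, arena_size=1.0, dt=0.02, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.empty((n_steps, 2))
    pos[0] = arena_size / 2          # start at the centre of the arena
    heading = 0.0
    for t in range(1, n_steps):
        heading += rng.normal(0.0, 0.2)      # small random turns
        speed = abs(rng.normal(0.1, 0.02))   # roughly constant running speed
        step = speed * dt * np.array([np.cos(heading), np.sin(heading)])
        pos[t] = np.clip(pos[t - 1] + step, 0.0, arena_size)  # stay inside the arena
    return pos

# The velocity signal along the trajectory is what would drive the grid-module updates.
velocities = np.diff(simulate_trajectory(), axis=0)
```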
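
The Open Datasets row describes thresholding Fashion MNIST pixels at 127 into bipolar (±1) patterns. A minimal sketch of that binarization, assuming the selected images are already available as a (105, 28, 28) uint8 array; the excerpt does not say how the images are loaded, so placeholder data is used here.

```python
import numpy as np

# Placeholder stand-in for the 105 selected Fashion MNIST images (28 x 28, uint8 in [0, 255]).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(105, 28, 28), dtype=np.uint8)

# Binarization described in the excerpt: pixels below 127 map to -1, all others to +1.
bipolar = np.where(images < 127, -1, 1).astype(np.int8)
assert bipolar.shape == (105, 28, 28)
```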
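
The Experiment Setup row specifies the stopping rule: at most 50 iterations, with convergence declared once successive network states reach a cosine similarity of at least 0.95. A hedged sketch of that criterion follows, with `update` standing in for the paper's attractor/factorization step, which is not reproduced here.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two state vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def run_to_fixed_point(update, state, max_iters=50, tol=0.95):
    # Iterate the attractor update for at most `max_iters` steps (50 in the paper)
    # and declare a fixed point when successive states stay above `tol` similarity.
    for _ in range(max_iters):
        new_state = update(state)
        if cosine_similarity(new_state, state) >= tol:
            return new_state, True      # converged to a fixed point
        state = new_state
    return state, False                 # accuracy is then evaluated at the last time step
```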