Sufficient conditions for offline reactivation in recurrent neural networks
Authors: Nanda H Krishna, Colin Bredenberg, Daniel Levenstein, Blake Aaron Richards, Guillaume Lajoie
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our findings using numerical experiments on two canonical neuroscience tasks: spatial position estimation based on self-motion cues, and head direction estimation based on angular velocity cues. Overall, our work provides theoretical support for modeling offline reactivation as an emergent consequence of task optimization in noisy neural circuits. A minimal sketch of this kind of noisy-RNN task setup appears after the table. |
| Researcher Affiliation | Academia | Nanda H Krishna (1,2), Colin Bredenberg (1,2), Daniel Levenstein (1,3), Blake Aaron Richards (1,3,4,5), Guillaume Lajoie (1,2,4). 1: Mila - Quebec AI Institute; 2: Université de Montréal; 3: McGill University; 4: Canada CIFAR AI Chair; 5: CIFAR Learning in Machines & Brains. Corresponding authors: {nanda.harishankar-krishna,colin.bredenberg,guillaume.lajoie}@mila.quebec |
| Pseudocode | No | The paper presents mathematical equations and describes algorithms in prose but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available on GitHub at https://github.com/nandahkrishna/RNNReactivation. |
| Open Datasets | No | The paper describes generating its own data based on models like Erdem & Hasselmo (2012) and the RatInABox package. It does not provide access information (link, DOI, specific repository) for the generated datasets themselves, nor does it explicitly state the use of a named public dataset with access details. A trajectory-generation sketch using RatInABox appears after the table. |
| Dataset Splits | No | The paper mentions 'Test metrics as a function of training batches' as well as 'active' and 'quiescent' phases, but it does not specify explicit training/validation/test splits by percentage or sample count. It discusses training and evaluation, but no distinct validation split. |
| Hardware Specification | No | The paper only states: 'The authors also acknowledge the support of computational resources provided by Mila (https://mila.quebec) and NVIDIA that enabled this research.' This does not identify specific hardware (e.g., GPU/CPU models, memory) used for the experiments. |
| Software Dependencies | Yes | The paper states: 'To compute KDEs, we used the stats.gaussian_kde() method from scipy (Virtanen et al., 2020), with all hyperparameters set to their default values', and that trajectories were generated 'using an implementation similar to the RatInABox package (George et al., 2024)'. A minimal KDE usage sketch appears after the table. |
| Experiment Setup | Yes | Table A.1: Hyperparameters for the spatial position estimation task. Table A.2: Hyperparameters for the head direction estimation task. These tables specify values for network architecture, time constants, noise levels, batch size, number of batches, optimizer, and learning rate. A placeholder configuration sketch covering these categories appears after the table. |
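
As a companion to the Research Type row, here is a minimal sketch of the kind of noisy rate RNN and path-integration setup the paper describes: a network driven by self-motion (velocity) cues, with private noise injected at every timestep, trained to report integrated position. All names, sizes, and constants below are illustrative assumptions, not the authors' implementation (their repository has the actual code).

```python
import torch
import torch.nn as nn

class NoisyRNN(nn.Module):
    """Noisy continuous-time rate RNN, Euler-discretized; all sizes are placeholders."""

    def __init__(self, n_in=2, n_hidden=128, n_out=2, tau=0.1, dt=0.01, noise_std=0.1):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Linear(n_hidden, n_hidden)
        self.w_out = nn.Linear(n_hidden, n_out)
        self.alpha = dt / tau        # integration step relative to the time constant
        self.noise_std = noise_std   # private noise injected at every timestep

    def forward(self, inputs, h=None):
        # inputs: (time, batch, n_in) velocity cues; returns position estimates
        T, B, _ = inputs.shape
        if h is None:
            h = inputs.new_zeros(B, self.w_rec.out_features)
        outputs = []
        for t in range(T):
            noise = self.noise_std * torch.randn_like(h)
            h = (1 - self.alpha) * h + self.alpha * torch.tanh(
                self.w_rec(h) + self.w_in(inputs[t]) + noise
            )
            outputs.append(self.w_out(h))
        return torch.stack(outputs), h

# Toy usage: estimate position by integrating noisy velocity inputs.
model = NoisyRNN()
vel = 0.1 * torch.randn(100, 32, 2)   # toy self-motion cues (time, batch, 2)
pos = torch.cumsum(vel, dim=0)        # ground-truth integrated position
pred, _ = model(vel)
loss = ((pred - pos) ** 2).mean()     # position-estimation objective
```

An analogous setup with angular-velocity inputs and a (cos θ, sin θ) target would correspond to the head direction estimation task.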
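
Since the training data are generated rather than downloaded, trajectories can be regenerated with the RatInABox package cited in the Open Datasets row. A minimal sketch follows; the default arena, motion model, and `history` keys are taken from RatInABox's documented API, and the paper's exact generation settings may differ.

```python
from ratinabox import Environment, Agent

env = Environment()                  # default 1 m x 1 m 2D arena
agent = Agent(env)                   # default smooth random-motion model
for _ in range(int(60 / agent.dt)):  # simulate 60 seconds of exploration
    agent.update()

positions = agent.history["pos"]     # trajectory (x, y) at each timestep
velocities = agent.history["vel"]    # self-motion cues to feed to the network
```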
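
The scipy call quoted in the Software Dependencies row can be reproduced directly. A minimal sketch with all hyperparameters at their defaults; the sample data here are synthetic placeholders, not the paper's network states.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
samples = rng.standard_normal((2, 500))   # shape (n_dims, n_samples), e.g. 2D positions

kde = stats.gaussian_kde(samples)         # default bandwidth (Scott's rule)
grid = np.mgrid[-3:3:50j, -3:3:50j].reshape(2, -1)
density = kde(grid)                       # density evaluated at each grid point
```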
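
Finally, the hyperparameter categories listed in Tables A.1 and A.2 map naturally onto a flat configuration dictionary. Every value below is a placeholder for illustration only; the actual values are in the paper's appendix tables.

```python
# Placeholder values -- consult Tables A.1 (spatial position) and
# A.2 (head direction) in the paper for the values actually used.
config = {
    "n_hidden": 256,        # network size (placeholder)
    "tau": 0.1,             # neural time constant in seconds (placeholder)
    "noise_std": 0.1,       # injected noise level (placeholder)
    "batch_size": 64,       # placeholder
    "n_batches": 10_000,    # number of training batches (placeholder)
    "optimizer": "Adam",    # placeholder
    "learning_rate": 1e-3,  # placeholder
}
```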