Understanding Neural Coding on Latent Manifolds by Sharing Features and Dividing Ensembles
Authors: Martin Bjerke, Lukas Schott, Kristopher T. Jensen, Claudia Battistin, David A. Klindt, Benjamin Adric Dunn
ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 EXPERIMENTS, 4.1 SIMULATIONS: FEATURE SHARING, 4.2 SIMULATIONS: ENSEMBLE DETECTION, 4.3 REAL DATASETS: HEAD DIRECTION DATA, 4.4 REAL DATASETS: GRID CELL DATA |
| Researcher Affiliation | Collaboration | Norwegian University of Science and Technology; Bosch Center for Artificial Intelligence; University of Cambridge |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and implementation are included as supplementary material and are also made available at: https://github.com/david-klindt/NeuralLVM. |
| Open Datasets | Yes | Datasets are public, and details regarding model parameters, data generation, etc. are specified in the relevant sections (e.g., Sec. 4) and more thoroughly in the Appendix (e.g., Appx. A, B, C, J & K) (from the Reproducibility Statement). The paper uses data from Peyrache et al. (2015b) and data recorded from the medial entorhinal cortex (MEC) of a freely moving rat (Gardner et al., 2022). |
| Dataset Splits | No | No explicit mention of a separate validation dataset split was found. The paper mentions a 'train-test split of 95%/5%' and 'held-out data from held-out neurons' but does not define a distinct validation set. |
| Hardware Specification | No | The paper mentions running experiments 'on a CPU' or 'on a GPU' and on 'a large compute cluster' but does not specify exact hardware models, types of processors, or memory details. |
| Software Dependencies | No | The paper mentions 'Pytorch (Paszke et al., 2019)' and 'standard scikit-learn implementation' but does not provide specific version numbers for these software dependencies, which is required for reproducibility. |
| Experiment Setup | Yes | All models are trained with the Adam optimizer (Kingma & Ba, 2014), a learning rate of 0.001, temporal chunks of size 128 (64 if the data length is smaller than 128), and a batch size of 1. Training is concluded after the training objective has not improved for 5 steps (10 for grid cell data). All models are implemented in PyTorch (Paszke et al., 2019). (from Appendix A; see also Table 2 in Appendix K for hyperparameter values.) A minimal sketch of this configuration appears below the table. |
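
The Experiment Setup row quotes concrete training hyperparameters from Appendix A. The following is a minimal PyTorch sketch of that configuration, not the authors' actual training code: the `model` and its `objective` method are hypothetical stand-ins for the paper's latent-variable model and training objective, while the optimizer, learning rate, chunking, batch size, and early-stopping criterion follow the values reported above.

```python
# Minimal sketch of the reported training setup (hypothetical model/objective).
import torch


def train(model, data, chunk_size=128, lr=1e-3, patience=5):
    """data: tensor of shape (neurons, time), split into temporal chunks, batch size 1."""
    if data.shape[1] < 128:
        chunk_size = 64  # reported: chunks of 64 if data length is smaller than 128
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # Adam, lr = 0.001
    best, stale = float("inf"), 0
    while stale < patience:  # stop after `patience` steps without improvement (5; 10 for grid cells)
        epoch_loss = 0.0
        for start in range(0, data.shape[1] - chunk_size + 1, chunk_size):
            chunk = data[:, start:start + chunk_size].unsqueeze(0)  # batch size 1
            loss = model.objective(chunk)  # hypothetical: the paper's training objective
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss < best:
            best, stale = epoch_loss, 0
        else:
            stale += 1
```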