Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

On Conformal Isometry of Grid Cells: Learning Distance-Preserving Position Embedding

Authors: Dehong Xu, Ruiqi Gao, Wen-Hao Zhang, Xue-Xin Wei, Ying Nian Wu

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct numerical experiments to show that this hypothesis leads to the hexagonal grid firing patterns by learning maximally distance-preserving position embedding, agnostic to the choice of the recurrent neural network. Furthermore, we present a theoretical explanation of why hexagon periodic patterns emerge by minimizing our loss function by showing that hexagon flat torus is maximally distance preserving."
Researcher Affiliation | Academia | "Dehong Xu¹, Ruiqi Gao¹, Wen-Hao Zhang², Xue-Xin Wei³, Ying Nian Wu¹ — ¹UCLA, ²UT Southwestern Medical Center, ³UT Austin"
Pseudocode | No | The paper presents mathematical formulations, definitions, theorems, and proofs, but it does not contain any explicitly labeled pseudocode or algorithm blocks with structured steps.
Open Source Code | Yes | Project page: https://github.com/DehongXu/grid-cell-conformal-isometry
Open Datasets | Yes | "To further investigate conformal isometry, we perform analysis on real neural recordings using data from Gardner et al. (2021)"
Dataset Splits | No | The paper describes generating data on a "40×40 regular lattice" and using "Monte Carlo samples" for training, rather than specifying train/test/validation splits from a fixed dataset. For the neural data analysis, it mentions using "data from one module" but no specific splits.
Hardware Specification | Yes | "All the models were trained on a single 2080 Ti GPU for 200,000 iterations with a learning rate of 0.003."
Software Dependencies | No | The paper mentions the "Adam optimizer (Kingma & Ba, 2014)" but does not provide version numbers for any software libraries, frameworks, or programming languages used.
Experiment Setup | Yes | "All the models were trained on a single 2080 Ti GPU for 200,000 iterations with a learning rate of 0.003. For batch size, we generated 4000 simulated data for each iteration. ... We minimize L = L1 + λL2 over the set of v(x) on the 40×40 lattice points as well as the parameters in F, such as B(θ) for the discrete set of directions θ in the linear model. λ > 0 is a hyper-parameter that balances L1 and L2. ... The dimensions of v(x), representing the total number of grid cells, were nominally set to 24 for both the linear model and nonlinear model 1, and 1000 for nonlinear model 2. ... For δx in L1, we constrained it locally, ensuring ‖δx‖ ≤ 1.25. For L2, ‖δx‖ was restricted to be smaller than 0.075."
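The distance-preserving objective quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the authors' implementation: the lookup-table embedding `v`, the fixed scale `s`, and the nearest-neighbor displacement sampling are hypothetical stand-ins for the paper's v(x), its conformal scaling factor, and its Monte Carlo sampling, and the second term L2 (weighted by λ) is omitted.

```python
import numpy as np

# Sketch of a conformal-isometry loss: for small displacements dx, the
# embedding distance ||v(x + dx) - v(x)|| should match s * ||dx|| for a
# single scale factor s. All names and values here are illustrative
# assumptions, not taken from the authors' code.
rng = np.random.default_rng(0)

N, d = 40, 24                    # 40x40 lattice, 24 grid cells
v = rng.normal(size=(N, N, d))   # embedding v(x) as a table over the lattice
s = 1.0                          # conformal scale factor (assumed fixed here)

def conformal_loss(v, s, n_samples=4000):
    """Mean squared deviation between embedding distance and s * ||dx||,
    over randomly sampled local (nearest-neighbor) lattice displacements."""
    x = rng.integers(1, N - 1, size=(n_samples, 2))   # interior lattice points
    dx = rng.integers(-1, 2, size=(n_samples, 2))     # local steps in {-1,0,1}
    y = x + dx                                        # stays inside the lattice
    emb_dist = np.linalg.norm(v[y[:, 0], y[:, 1]] - v[x[:, 0], x[:, 1]], axis=1)
    target = s * np.linalg.norm(dx, axis=1)
    return float(np.mean((emb_dist - target) ** 2))

loss = conformal_loss(v, s)
```

In the paper this kind of objective is minimized with Adam over 200,000 iterations, jointly over v(x) on the lattice and the parameters of the transition model F; the sketch above only evaluates the loss for one batch of sampled displacements.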