Neural Embeddings Rank: Aligning 3D latent dynamics with movements

Authors: Chenggang Chen, Zhiyu Yang, Xiaoqin Wang

NeurIPS 2024

Reproducibility assessment (each entry lists the variable, the result, and the LLM response):
Research Type: Experimental. We apply NER and six other dimensionality reduction techniques to neurons in the primary motor cortex (M1), dorsal premotor cortex (PMd), and primary somatosensory cortex (S1) as monkeys perform reaching tasks. Only NER aligns latent dynamics with both hand position and direction, and the alignment is visualizable in 3D. NER reveals consistent latent dynamics in M1 and PMd across sixteen sessions spanning a year. Using a linear regression decoder, NER explains 86% and 97% of the variance in velocity and position, respectively. Linear models trained on data from one session successfully decode velocity, position, and direction in held-out test data from different dates and cortical areas (64%, 88%, and 90%).
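The linear decoding protocol above can be sketched as follows. This is a minimal illustration with synthetic stand-in data, not the paper's embeddings or kinematics; the array shapes, the 80/20 split, and the R² ("variance explained") readout mirror the setup described in the report.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for 3D NER embeddings (n_timepoints x 3); the real embeddings
# would come from the trained NER model, not from random numbers.
Z = rng.normal(size=(2000, 3))

# Stand-in kinematics (e.g. 2D velocity): a linear readout of the latents
# plus noise, mimicking the assumption that kinematics are linearly decodable.
W = rng.normal(size=(3, 2))
kinematics = Z @ W + 0.05 * rng.normal(size=(2000, 2))

# 80% training data, 20% held-out test data, as in the paper.
Z_tr, Z_te, y_tr, y_te = train_test_split(
    Z, kinematics, test_size=0.2, random_state=42
)

decoder = LinearRegression().fit(Z_tr, y_tr)
r2 = decoder.score(Z_te, y_te)  # fraction of variance explained (R^2)
```

Cross-session and cross-area decoding, as reported above, would amount to fitting `decoder` on one session's embeddings and calling `score` on another session's.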
Researcher Affiliation: Academia. Chenggang Chen, Zhiyu Yang, Xiaoqin Wang, Department of Biomedical Engineering, Johns Hopkins University (cheng-gang.chen@jhu.edu).
Pseudocode: No. The paper describes the mathematical formulation of NER and contrasts it with CEBRA, but it does not present a formal pseudocode block or algorithm figure.
Open Source Code: Yes. The code is available at https://github.com/NeuroscienceAI/NER.
Open Datasets: Yes. All the center-out reaching experiments use open-source data from https://datadryad.org/stash/dataset/doi:10.5061/dryad.xd2547dkt, released with the paper https://elifesciences.org/articles/73155#data. We used all sessions from Monkey Chewie and Monkey Han. The MC_Maze dataset includes recordings from the primary motor and dorsal premotor cortices of a monkey performing reaches to visual targets in a virtual maze with an instructed delay. This dataset can be downloaded from https://dandiarchive.org/dandiset/000128 or https://github.com/dandisets/000128.
Dataset Splits: No. The paper specifies '80% training data' and '20% held-out test data' but does not mention a separate validation split for hyperparameter tuning or early stopping, so it does not document all three (train/validation/test) splits.
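For reference, the reported 80/20 split, plus the validation split the paper omits, could be carved out as follows. This is a generic sketch with placeholder data, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(1000).reshape(-1, 1)  # placeholder dataset, 1000 samples

# 80% training / 20% held-out test, as reported in the paper.
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)

# A validation split (not reported in the paper) would carve a further
# fraction out of the training portion; 0.125 * 0.8 = 0.1 of the whole
# dataset, giving a 70/10/20 train/validation/test split.
X_train, X_val = train_test_split(X_train, test_size=0.125, random_state=42)
```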
Hardware Specification: Yes. The operating system is Ubuntu 22.04.3 LTS, the memory is 42 GB, the CPU is an Intel Xeon W-2225, and the GPU is an NVIDIA RTX A5000.
Software Dependencies: No. The paper mentions the operating system (Ubuntu 22.04.3 LTS) and libraries such as scipy and sklearn, but it does not specify version numbers for these Python libraries, which are key software components.
Experiment Setup: Yes. Training runs for 20,000 iterations with a learning rate of 1e-4 and a batch size of 512. The temperature is fixed to 1 for both NER and CEBRA, and the output dimension of NER is fixed to 3. For CEBRA, an output dimension of 3 is used for visualizing latent dynamics and for linear-model decoding; 16-dimensional embeddings are used only for k-NN decoders. The random seed is fixed to 42. For pi-VAE, the batch size is 200 and the number of iterations is 300. For curved movements, training runs for 5,000 iterations with a learning rate of 1e-4 and a batch size of 512.
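Gathered in one place, the reported hyperparameters look like this. The dictionary and key names are illustrative and do not come from the authors' code; only the values are taken from the setup described above.

```python
# Hyperparameters as reported in the paper (key names are illustrative).
NER_CONFIG = {
    "max_iterations": 20_000,
    "learning_rate": 1e-4,
    "batch_size": 512,
    "temperature": 1,       # fixed to 1 for both NER and CEBRA
    "output_dimension": 3,  # 3D embeddings for visualization and linear decoding
}

# CEBRA uses 16D embeddings only for the k-NN decoders.
CEBRA_KNN_OUTPUT_DIMENSION = 16

PIVAE_CONFIG = {
    "random_seed": 42,
    "batch_size": 200,
    "max_iterations": 300,
}

# The curved-movements experiments reuse the NER settings with fewer iterations.
CURVED_MOVEMENTS_CONFIG = {**NER_CONFIG, "max_iterations": 5_000}
```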