Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity

Authors: Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith Hengen, Michal Valko, Eva Dyer

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior. We apply our method to synthetic data and publicly available non-human primate (NHP) reaching datasets from two different individuals (25). In Section 3.4, we introduce metrics to quantify the disentanglement of our representations of behavior and apply them to neural datasets from different non-human primates (Section 4) to gain insights into the link between neural activity and behavior.
Researcher Affiliation | Collaboration | Ran Liu (Georgia Tech), Mehdi Azabou (Georgia Tech), Max Dabagia (Georgia Tech), Chi-Heng Lin (Georgia Tech), Mohammad Gheshlaghi Azar (DeepMind), Keith B. Hengen (Washington Univ. in St. Louis), Michal Valko (DeepMind), Eva L. Dyer (Georgia Tech)
Pseudocode | No | The paper describes its methods using prose and mathematical equations but does not include explicit pseudocode or algorithm blocks.
Open Source Code | Yes | A PyTorch (39) implementation is provided here: https://nerdslab.github.io/SwapVAE/.
Open Datasets | Yes | We apply our method to synthetic data and publicly available non-human primate (NHP) reaching datasets from two different individuals (25).
Dataset Splits | No | For the synthetic dataset, the paper states: "split into 80% training set and 20% test set". For the neural datasets, it states: "In our experiments, we split the dataset into 80% for train and 20% for test." A validation split is not explicitly reported with percentages or counts.
Hardware Specification | Yes | All models are trained using one Nvidia Titan RTX GPU for 200 epochs using the Adam optimizer with a learning rate of 0.0005 (further details can be found in Appendix A).
Software Dependencies | No | A PyTorch (39) implementation is provided here: https://nerdslab.github.io/SwapVAE/. While PyTorch is named, its version number is not explicitly stated in the text describing the implementation or dependencies.
Experiment Setup | Yes | All models are trained on the training set for 100,000 iterations, and optimized using Adam with a learning rate of 0.0005. All models are trained using one Nvidia Titan RTX GPU for 200 epochs using the Adam optimizer with a learning rate of 0.0005. All models have a 128-dim latent space, which is split for our model into 64-dim blocks for the style and content space. All generative models have an encoder and a symmetric decoder, where the encoder has three fully connected layers with size [d, 128, 128], batch normalization, and ReLU activations. All discriminative models have an encoder of 4 linear layers with size [d, 128, 128, 128]. (See the sketches after this table.)
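
To make the Experiment Setup row concrete, here is a minimal PyTorch sketch of the described generative architecture: an encoder with three fully connected layers of size [d, 128, 128], batch normalization and ReLU, a 128-dim latent split into 64-dim content and style blocks, and a symmetric decoder. The class names, the deterministic (non-variational) latent, and the forward interfaces are illustrative assumptions; the authors' released SwapVAE implementation is variational and may differ in detail.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, input_dim, latent_dim=128):
        super().__init__()
        # Three fully connected layers [d, 128, 128] with batch norm and ReLU,
        # as quoted in the Experiment Setup row.
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Linear(128, 128), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x):
        z = self.net(x)
        # 128-dim latent split into 64-dim content and 64-dim style blocks.
        z_content, z_style = z.chunk(2, dim=-1)
        return z_content, z_style

class Decoder(nn.Module):
    def __init__(self, output_dim, latent_dim=128):
        super().__init__()
        # Symmetric to the encoder: 128 -> 128 -> 128 -> output_dim.
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Linear(128, 128), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Linear(128, output_dim),
        )

    def forward(self, z_content, z_style):
        # Reassemble the full 128-dim latent before decoding.
        return self.net(torch.cat([z_content, z_style], dim=-1))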
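
A similarly hedged sketch of the quoted training configuration: an 80%/20% train/test split, Adam with a learning rate of 0.0005, and 200 epochs on a single GPU. The batch size, the dataset object, and the loss function are placeholders, not values reported in the paper.

import torch
from torch.utils.data import DataLoader, random_split

def train(model, dataset, loss_fn, batch_size=256, epochs=200, lr=5e-4, device="cuda"):
    # 80/20 train/test split, as quoted in the Dataset Splits row.
    n_train = int(0.8 * len(dataset))
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)

    # Adam optimizer with learning rate 0.0005, trained for 200 epochs.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device)

    for epoch in range(epochs):
        for batch in loader:
            x = batch.to(device)          # assumes the dataset yields tensors
            optimizer.zero_grad()
            loss = loss_fn(model, x)      # placeholder loss callable
            loss.backward()
            optimizer.step()

    return model, test_set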