Leveraging Generative Models for Unsupervised Alignment of Neural Time Series Data

Authors: Ayesha Vermani, Il Memming Park, Josue Nassar

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our approach on simulations and show the efficacy of the alignment on neural recordings from the motor cortex obtained during a reaching task. We empirically validate our method on synthetic experiments and test it on neural recordings obtained from the primary motor cortex (M1) of two monkeys during a center-out reaching task (Dyer et al., 2017).
Researcher Affiliation | Collaboration | Champalimaud Centre for the Unknown, Champalimaud Foundation, Portugal; Ryvivyr, USA. {ayesha.vermani, memming.park}@research.fchampalimaud.org; josue.nassar@ryvivyr.com
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The corresponding code is available at https://github.com/ayeshav/align-seqvae.
Open Datasets | Yes | We applied our method to motor cortex recordings from two monkeys (M and C) during a delayed center-out reaching task (see (Dyer et al., 2017) for details).
Dataset Splits | No | The paper mentions using a 'held-out test set' for evaluation and specific amounts of training data (e.g., '500 trajectories' or '1,000 trajectories'), but it does not provide explicit train/validation/test splits (percentages, exact counts per split, or a clearly defined validation set).
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions using 'Adam to optimize the model' and a 'bi-directional GRU' but does not specify version numbers for any software, libraries, or frameworks used in the experiments.
Experiment Setup | Yes | Unless stated otherwise, we used a weight decay of 1e-4 and a learning rate of 1e-3. For all experiments, the seqVAE encoder was parametrized by a bi-directional GRU with 64 hidden units. The latent dynamics were modeled as pθ(xt | xt−1) = N(fθ(xt−1), Q), where fθ was a two-layer MLP with a width of 256 and tanh activations.
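The reported setup can be sketched in PyTorch. This is a minimal illustration of the stated components only (bi-directional GRU encoder with 64 hidden units, two-layer tanh MLP dynamics of width 256, Gaussian transition with covariance Q, Adam with lr 1e-3 and weight decay 1e-4); the latent and observation dimensions, the diagonal parametrization of Q, and all variable names are assumptions, not details from the paper or its code release.

```python
import torch
import torch.nn as nn

# Assumed dimensionalities for illustration; not specified in the paper.
LATENT_DIM, OBS_DIM = 8, 30

# Encoder: bi-directional GRU with 64 hidden units (as reported).
encoder_gru = nn.GRU(input_size=OBS_DIM, hidden_size=64,
                     batch_first=True, bidirectional=True)

# Dynamics mean f_theta: two-layer MLP, width 256, tanh activations.
f_theta = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.Tanh(),
    nn.Linear(256, LATENT_DIM),
)

# p_theta(x_t | x_{t-1}) = N(f_theta(x_{t-1}), Q); here Q is assumed
# diagonal and learned via its log-variance (an illustrative choice).
log_q_diag = nn.Parameter(torch.zeros(LATENT_DIM))

def transition(x_prev):
    """Return the Gaussian transition distribution over x_t."""
    mean = f_theta(x_prev)
    std = log_q_diag.exp().sqrt()
    return torch.distributions.Normal(mean, std)

# Optimizer with the hyperparameters quoted above.
optimizer = torch.optim.Adam(
    list(encoder_gru.parameters()) + list(f_theta.parameters()) + [log_q_diag],
    lr=1e-3, weight_decay=1e-4)
```

A bi-directional GRU doubles the feature dimension of its output (64 units per direction, 128 total), which a downstream read-out layer would need to account for.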