Towards robust and generalizable representations of extracellular data using contrastive learning
Authors: Ankit Vishnubhotla, Charlotte Loh, Akash Srivastava, Liam Paninski, Cole Hurwitz
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We validate our method across multiple high-density extracellular recordings. All code used to run CEED can be found at https://github.com/ankitvishnu23/CEED." (Lines 9-11) and "we find that CEED outperforms both PCA and the non-linear autoencoder using raw waveforms or denoised waveforms across all three datasets introduced in Section 4." (Lines 285-287) |
| Researcher Affiliation | Collaboration | Ankit Vishnubhotla, Columbia University, New York (av3016@columbia.edu); Charlotte Loh, MIT, Massachusetts (cloh@mit.edu); Liam Paninski, Columbia University, New York (liam@stat.columbia.edu); Akash Srivastava, MIT-IBM, Massachusetts (Akash.Srivastava@ibm.com); Cole Hurwitz, Columbia University, New York (ch3676@columbia.edu) (Lines 1-5) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | All code used to run CEED can be found at https://github.com/ankitvishnu23/CEED. (Lines 10-11) |
| Open Datasets | Yes | To train and evaluate our model, we make use of two publicly available extracellular recordings published by the International Brain Laboratory (IBL): the DY016 and DY009 recordings [54]. (Lines 216-218) |
| Dataset Splits | Yes | "The first dataset was extracted from the DY016 extracellular recording. It consisted of a 10 unit train and test dataset..." and "For this dataset, we constructed training sets of 200 or 1200 spikes per unit with a test set of 200 spikes per unit." (Lines 223-227) (see the per-unit split sketch below the table) |
| Hardware Specification | No | The paper mentions running experiments on 'large-scale, multi-gpu clusters' for the transformer model and a 'single GPU' for the MLP-based architecture, but does not specify exact GPU or CPU models, memory, or other detailed hardware specifications. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used for implementation. |
| Experiment Setup | Yes | "The MLP encoder is a straightforward model that consists of three layers with sizes [768, 512, 256] and ReLU activations between them." (Lines 204-206) and "For all baselines, we sweep across (3,5,7,9) principal components and 3-11 channel subset sizes." (Lines 263-264) (see the encoder and baseline-sweep sketches below the table) |
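To make the split construction quoted in the Dataset Splits row concrete, here is a minimal sketch assuming NumPy arrays of per-unit spike waveforms. The function name, random seed, and array shapes are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch of the per-unit split quoted in the Dataset Splits row:
# sample 200 (or 1200) training spikes and 200 held-out test spikes per unit.
def split_unit(spikes: np.ndarray, n_train: int = 200, n_test: int = 200, seed: int = 0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(spikes))
    train = spikes[idx[:n_train]]
    test = spikes[idx[n_train:n_train + n_test]]
    return train, test

# Usage with placeholder data: 500 spikes of 121 samples for one unit.
train, test = split_unit(np.random.randn(500, 121))
```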
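The Experiment Setup row pins down the MLP encoder shape exactly. Below is a minimal PyTorch sketch of one plausible reading of that description (three linear layers with output sizes [768, 512, 256], ReLU between them); the input dimension is a placeholder assumption, since the flattened waveform length is not quoted here.

```python
import torch
import torch.nn as nn

# Sketch of the MLP encoder described above: three linear layers with output
# sizes [768, 512, 256] and ReLU activations between them. The input
# dimension is a hypothetical placeholder, not taken from the paper.
def make_mlp_encoder(input_dim: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(input_dim, 768),
        nn.ReLU(),
        nn.Linear(768, 512),
        nn.ReLU(),
        nn.Linear(512, 256),
    )

# Usage with a placeholder input dimension.
encoder = make_mlp_encoder(input_dim=605)
embedding = encoder(torch.randn(8, 605))  # -> shape (8, 256)
```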
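Similarly, the quoted baseline sweep over (3,5,7,9) principal components can be sketched with scikit-learn; the data array and the printed diagnostic are placeholders, not the paper's actual evaluation pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical sketch of the PCA baseline sweep quoted above: fit PCA with
# 3, 5, 7, and 9 components on flattened waveforms (placeholder data).
waveforms = np.random.randn(2000, 605)  # (n_spikes, flattened waveform length)
for n_components in (3, 5, 7, 9):
    pca = PCA(n_components=n_components)
    features = pca.fit_transform(waveforms)
    print(n_components, features.shape, round(pca.explained_variance_ratio_.sum(), 3))
```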