Enabling hyperparameter optimization in sequential autoencoders for spiking neural data

Authors: Mohammad Reza Keshtkaran, Chethan Pandarinath

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | When applied to data from motor cortex recorded while monkeys made reaches in various directions, large-scale HP optimization allowed SAEs to better maintain performance for small dataset sizes.
Researcher Affiliation | Academia | Mohammad Reza Keshtkaran, Coulter Dept. of Biomedical Engineering, Emory University and Georgia Tech, Atlanta, GA 30322, mkeshtk@emory.edu; Chethan Pandarinath, Coulter Dept. of Biomedical Engineering and Dept. of Neurosurgery, Emory University and Georgia Tech, Atlanta, GA 30322, chethan@gatech.edu
Pseudocode | No | The paper describes the LFADS architecture and its components but does not provide any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using a 'publicly-available LFADS codepack' but does not provide a link or an explicit statement about making the code for its specific contributions (Sample Validation, Coordinated Dropout, the HP optimization framework) publicly available.
Open Datasets | Yes | The second dataset we analyzed is publicly available (indy_20160426_01 [25]). [25] Joseph E. O'Doherty, Mariana M. B. Cardoso, Joseph G. Makin, and Philip N. Sabes. Nonhuman Primate Reaching with Multichannel Sensorimotor Cortex Electrophysiology [Data set], May 2017. Zenodo. http://doi.org/10.5281/zenodo.583331.
Dataset Splits | Yes | In all cases, 80% of trials were used for model training, while 20% were held out for validation. (A minimal sketch of such a trial-level split appears after this table.)
Hardware Specification | No | The paper mentions running experiments on 'a local cluster' but does not provide specific hardware details such as GPU or CPU models.
Software Dependencies | No | The paper mentions using a 'publicly-available LFADS codepack' but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Table 1 lists the HPs searched with PBT: L2 Gen scale (5, 5e4) log-uniform; L2 Con scale (5, 5e4) log-uniform; KL IC scale (0.05, 5) log-uniform; KL CO scale (0.05, 5) log-uniform; Dropout (0, 0.7) uniform; Keep ratio (0.3, 0.99) 0.5; Learning rate (1e-5, 0.02) 0.01. (A sketch of sampling from these ranges follows the table.)
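
The paper reports only the split proportions, so the following is a minimal sketch of how an 80/20 trial-level split might be implemented; the array names, shapes, and random seed are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy spike-count tensor shaped (trials, time bins, channels),
# standing in for the binned spiking data used with LFADS.
spikes = rng.poisson(0.2, size=(400, 100, 64))

# Shuffle trials, then take 80% for training and hold out 20% for validation.
n_trials = spikes.shape[0]
perm = rng.permutation(n_trials)
n_train = int(0.8 * n_trials)

train_spikes = spikes[perm[:n_train]]   # 80% of trials for model training
valid_spikes = spikes[perm[n_train:]]   # 20% of trials held out for validation
```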
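The Table 1 ranges can be read as sampling distributions for the initial PBT population. The sketch below shows one way to draw a single hyperparameter set from those ranges; the dictionary keys and the `log_uniform` helper are hypothetical (the authors' PBT framework is not released in the paper), and where the table does not state a distribution (keep ratio, learning rate) uniform and log-uniform draws are assumed.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def log_uniform(low, high):
    """Draw a value whose logarithm is uniform on [log(low), log(high)]."""
    return float(np.exp(rng.uniform(np.log(low), np.log(high))))

# One candidate HP set sampled from the Table 1 search ranges.
initial_hps = {
    "l2_gen_scale":  log_uniform(5.0, 5e4),    # log-uniform per Table 1
    "l2_con_scale":  log_uniform(5.0, 5e4),    # log-uniform per Table 1
    "kl_ic_scale":   log_uniform(0.05, 5.0),   # log-uniform per Table 1
    "kl_co_scale":   log_uniform(0.05, 5.0),   # log-uniform per Table 1
    "dropout":       rng.uniform(0.0, 0.7),    # uniform per Table 1
    "keep_ratio":    rng.uniform(0.3, 0.99),   # distribution assumed uniform
    "learning_rate": log_uniform(1e-5, 0.02),  # distribution assumed log-uniform
}
print(initial_hps)
```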