Scalable Bayesian GPFA with automatic relevance determination and discrete noise models
Authors: Kristopher Jensen, Ta-Chu Kao, Jasmine Stone, Guillaume Hennequin
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply bGPFA to continuous recordings spanning 30 minutes with over 14 million data points from primate motor and somatosensory cortices during a self-paced reaching task. We validate our method on synthetic and biological data, where bGPFA exhibits superior performance to GPFA and Poisson GPFA with increased scalability and without requiring cross-validation to select the latent dimensionality. We then apply bGPFA to longitudinal, multi-area recordings from primary motor (M1) and sensory (S1) areas during a monkey self-paced reaching task spanning 30 minutes. |
| Researcher Affiliation | Academia | Kristopher T. Jensen* Ta-Chu Kao* Jasmine T. Stone Guillaume Hennequin Department of Engineering University of Cambridge {ktj21, tck29, jts58, gjeh2}@cam.ac.uk |
| Pseudocode | Yes | The algorithm is described in pseudocode with further implementation and computational details in Appendix L. |
| Open Source Code | Yes | Here, we provide a ready-to-use Python package with GPU implementations of not only bGPFA with ARD, but also standard GPFA and Factor Analysis with Gaussian and non-Gaussian noise models. |
| Open Datasets | Yes | "We applied bGPFA to biological data recorded from a rhesus macaque during a self-paced reaching task with continuous recordings spanning 30 minutes (37, 42; Figure 3a)." and "We are grateful to O'Doherty et al. [42] for making their data publicly available and to Marine Schimel and David Liu for insightful discussions." (Reference 42: "O'Doherty, J. E., Cardoso, M., Makin, J., and Sabes, P. (2017). Nonhuman primate reaching with multichannel sensorimotor cortex electrophysiology. Zenodo. http://doi.org/10.5281/zenodo.583331") |
| Dataset Splits | No | The paper discusses 'cross-validation' and 'training data' but does not specify explicit percentages, sample counts, or refer to a named standard split for training, validation, and test datasets. |
| Hardware Specification | No | The paper does not specify the exact hardware components (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions a 'Python package with GPU implementations' but does not provide specific version numbers for Python, for any deep learning framework (e.g., PyTorch, TensorFlow), or for CUDA libraries. |
| Experiment Setup | No | The paper describes the optimization process using stochastic gradient ascent with Adam and mini-batches but does not provide specific hyperparameter values such as learning rate, batch size, or number of epochs. |
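For context on the model family the table refers to: GPFA posits smooth latent trajectories drawn from Gaussian processes, mapped to neural activity through a linear readout, with either a Gaussian or a discrete (e.g., Poisson) observation noise model. The following is a minimal generative sketch of that standard setup, not the authors' implementation; all variable names, lengthscales, and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D_latent, N_neurons = 100, 3, 20  # time bins, latent dims, neurons
ts = np.linspace(0.0, 1.0, T)

def rbf(ts, lengthscale):
    """RBF (squared-exponential) kernel over time points.
    Under ARD, each latent dimension gets its own scale, and
    dimensions whose scale shrinks to zero are pruned automatically."""
    d = ts[:, None] - ts[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Sample D_latent independent GP trajectories, one per latent dimension.
lengthscales = [0.1, 0.2, 0.3]  # illustrative values
X = np.stack([
    rng.multivariate_normal(np.zeros(T), rbf(ts, l) + 1e-6 * np.eye(T))
    for l in lengthscales
])  # shape (D_latent, T)

C = rng.normal(size=(N_neurons, D_latent)) / np.sqrt(D_latent)  # loadings
d = rng.normal(size=(N_neurons, 1))                             # baselines
F = C @ X + d  # latent "rates", shape (N_neurons, T)

# Two observation models discussed in the paper:
Y_gauss = F + 0.1 * rng.normal(size=F.shape)  # Gaussian noise (classic GPFA)
Y_pois = rng.poisson(np.exp(F))               # discrete Poisson noise
print(Y_gauss.shape, Y_pois.shape)
```

In the paper's Bayesian variant, the loading matrix and latents are inferred jointly with ARD priors, so the effective latent dimensionality is selected during training rather than by cross-validation, which is what the "Research Type" row above alludes to.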