Bubblewrap: Online tiling and real-time flow prediction on neural manifolds

Authors: Anne Draelos, Pranjal Gupta, Na Young Jun, Chaichontat Sriworarat, John Pearson

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrated the performance of Bubblewrap on both simulated non-linear dynamical systems and experimental neural data. We compared these results to two existing online learning models for neural data, both of which are based on dynamical systems [30, 32]. For each data set, we gave each model the same data as reduced by random projections and proSVD. For comparisons across models, we quantified overall model performance by taking the mean log predictive probability over the last half of each data set (Table 1). A hedged sketch of this metric is given after the table.
Researcher Affiliation | Academia | Anne Draelos, Biostatistics & Bioinformatics, Duke University, anne.draelos@duke.edu; Pranjal Gupta, Psychology & Neuroscience, Duke University, pranjal.gupta@duke.edu; Na Young Jun, Neurobiology, Duke University, nayoung.jun@duke.edu; Chaichontat Sriworarat, Biomedical Engineering, Duke University, chaichontat.s@duke.edu; John Pearson, Biostatistics & Bioinformatics / Electrical & Computer Engineering / Neurobiology / Psychology & Neuroscience, Duke University, john.pearson@duke.edu
Pseudocode | Yes | Algorithm 1: Procrustean SVD (proSVD). A hedged sketch of a proSVD-style update is given after the table.
Open Source Code | Yes | Our implementation of Bubblewrap, as well as code to reproduce our experiments, is open-source and available online at http://github.com/pearsonlab/Bubblewrap.
Open Datasets | Yes | For experimental data, we used four publicly available datasets from a range of applications: 1) trial-based spiking data recorded from primary motor cortex in monkeys performing a reach task [48, 49], preprocessed by performing online jPCA [49]; 2) continuous video data and 3) trial-based wide-field calcium imaging from a rodent decision-making task [50, 51]; 4) high-throughput Neuropixels data [52, 53]. A sketch of the random-projection preprocessing applied before modeling is given after the table.
Dataset Splits | No | For comparisons across models, we quantified overall model performance by taking the mean log predictive probability over the last half of each data set (Table 1). The paper does not specify percentages or counts for training/validation/test splits, nor a detailed splitting methodology.
Hardware Specification | No | The paper mentions training on a GPU, but does not provide specific hardware details such as GPU models, CPU models, or memory specifications.
Software Dependencies | No | The paper mentions 'JAX [54]' but does not provide specific version numbers for JAX or any other software dependencies used in the experiments.
Experiment Setup | Yes | Given: hyperparameters λj, νj, βt, forgetting rate εt, step size δ, initial data buffer M. Initialize with {x1 ... xM}: µj ← µ, Σj ← Σ, aij ← 1/N. ... trained this model to maximize L(A, µ, Σ) using Adam [47], enforcing parameter constraints by replacing them with unconstrained variables aij and lower triangular Lj with positive diagonal. A hedged sketch of this reparameterization and one optimizer step is given after the table.
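
The rows above note that every model received "the same data as reduced by random projections and proSVD." The snippet below is a minimal sketch of the random-projection step, assuming a Gaussian projection matrix with 1/sqrt(k) scaling and a caller-chosen target dimension k; the paper does not specify the projection family or dimensions, so those choices are assumptions.

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Reduce (T, D) observations to (T, k) with a fixed random linear map.

    Gaussian entries with 1/sqrt(k) scaling are a standard choice and an
    assumption here; the paper only says "random projections".
    """
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ P
```

The reduced stream would then pass through a proSVD-style update (next sketch) before being fed to each online model.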
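
The Pseudocode row points to Algorithm 1 (proSVD). The sketch below paraphrases one streaming update in that spirit: extend the current orthonormal basis with the part of the new block it misses, re-truncate with an SVD, and rotate the result back toward the previous basis with an orthogonal Procrustes step. The exact bookkeeping in the paper's Algorithm 1 (e.g., forgetting factors and the update of the right-singular side of B) may differ, so treat this as an assumption-laden sketch rather than the released implementation.

```python
import numpy as np

def pro_svd_update(Q, B, A_new):
    """One proSVD-style streaming update (sketch).

    Q     : (d, k) current orthonormal basis.
    B     : (k, k) running coefficient matrix.
    A_new : (d, m) new block of observations (columns are samples, d >= m).
    """
    k = Q.shape[1]
    m = A_new.shape[1]
    C = Q.T @ A_new                        # new block in the current basis
    residual = A_new - Q @ C               # component outside span(Q)
    Q_perp, R_perp = np.linalg.qr(residual)
    # Augmented coefficient matrix combining old and new information.
    M = np.block([[B, C],
                  [np.zeros((m, k)), R_perp]])
    U, S, _ = np.linalg.svd(M)
    U1 = U[:, :k]                          # top-k left singular vectors
    # Orthogonal Procrustes: rotate the refreshed basis to stay as close as
    # possible to the old one, so downstream coordinates drift smoothly.
    W, _, Zt = np.linalg.svd(U1[:k, :].T)  # SVD of U1^T [I; 0]
    T = W @ Zt
    Q_aug = np.concatenate([Q, Q_perp], axis=1)
    Q_new = Q_aug @ (U1 @ T)               # updated (d, k) orthonormal basis
    B_new = T.T @ np.diag(S[:k])           # simplified; the paper also aligns the V side
    return Q_new, B_new
```

One way to initialize is from an SVD of an initial buffer A0: `U, S, _ = np.linalg.svd(A0, full_matrices=False); Q, B = U[:, :k], np.diag(S[:k])`.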
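
The Experiment Setup row says the objective L(A, µ, Σ) is maximized with Adam after replacing constrained parameters with unconstrained ones: logits aij for the row-stochastic transition matrix and lower-triangular Lj with positive diagonal for each covariance. Below is a minimal JAX sketch of that reparameterization and one optimizer step; the row-wise softmax, the softplus on the diagonal, the use of optax for Adam, and the `neg_objective` placeholder are all assumptions, since the paper only states Adam [47] and JAX [54].

```python
import jax
import jax.numpy as jnp
import optax  # Adam via optax is an assumption; the paper only cites Adam [47]

def to_constrained(raw):
    """Map unconstrained optimization variables to constrained model parameters.

    raw["a"]  : (N, N) logits; a row-wise softmax gives a row-stochastic A.
    raw["L"]  : (N, d, d); the strictly lower part is kept and the diagonal is
                passed through softplus so each L_j is lower triangular with a
                positive diagonal (softplus is an assumption), then
                Sigma_j = L_j @ L_j.T.
    raw["mu"] : (N, d) tile means, left unconstrained.
    """
    A = jax.nn.softmax(raw["a"], axis=-1)
    strict_lower = jnp.tril(raw["L"], k=-1)
    pos_diag = jax.nn.softplus(jnp.diagonal(raw["L"], axis1=-2, axis2=-1))
    L = strict_lower + jax.vmap(jnp.diag)(pos_diag)
    Sigma = L @ jnp.swapaxes(L, -1, -2)
    return A, raw["mu"], Sigma

def make_update_step(neg_objective, learning_rate=1e-2):
    """Build one Adam step maximizing L(A, mu, Sigma) by minimizing its negative.

    `neg_objective(constrained_params, batch)` is a placeholder for the model's
    negated objective and is not defined here.
    """
    optimizer = optax.adam(learning_rate)

    @jax.jit
    def step(raw, opt_state, batch):
        loss = lambda r: neg_objective(to_constrained(r), batch)
        grads = jax.grad(loss)(raw)
        updates, opt_state = optimizer.update(grads, opt_state)
        return optax.apply_updates(raw, updates), opt_state

    return optimizer, step
```

Typical usage would be `optimizer, step = make_update_step(neg_objective)`, then `opt_state = optimizer.init(raw_params)` and repeated calls `raw_params, opt_state = step(raw_params, opt_state, x_t)` as data streams in.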
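
The summary number reported in Table 1 is the mean log predictive probability over the last half of each data set (Research Type and Dataset Splits rows). The sketch below assumes the one-step-ahead predictive density is a Gaussian mixture over tiles with weights given by propagating the current occupancy alpha through the transition matrix A; that weighting is an assumption about the predictive distribution, not a statement about the released code.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def log_predictive_probability(x_next, alpha, A, mus, covs):
    """Log p(x_{t+1} | data so far) under a Gaussian-tile model (sketch).

    alpha : (N,) current tile occupancies; alpha @ A gives the assumed
            one-step-ahead mixture weights.
    """
    weights = alpha @ A                              # (N,) predicted occupancies
    log_components = np.array([
        multivariate_normal.logpdf(x_next, mean=mu, cov=cov)
        for mu, cov in zip(mus, covs)
    ])
    return logsumexp(np.log(weights + 1e-16) + log_components)

def table1_summary(per_step_log_p):
    """Mean log predictive probability over the last half of the stream (Table 1)."""
    per_step_log_p = np.asarray(per_step_log_p)
    return per_step_log_p[len(per_step_log_p) // 2:].mean()
```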