Model-based targeted dimensionality reduction for neuronal population data
Authors: Mikio Aoi, Jonathan W. Pillow
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our approach outperforms alternative methods in both mean squared error of the parameter estimates, and in identifying the correct dimensionality of encoding using simulated data. We also show that our method provides more accurate inference of low-dimensional subspaces of activity than a competing algorithm, demixed PCA. |
| Researcher Affiliation | Academia | Mikio C. Aoi, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, maoi@princeton.edu; Jonathan W. Pillow, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, pillow@princeton.edu |
| Pseudocode | Yes | Algorithm 1 Estimation of dimensionality |
| Open Source Code | Yes | Demonstration code is available for download at the first author's website at http://www.mikioaoi.com/samplecode/RDRdemo.zip |
| Open Datasets | No | We applied our greedy algorithm on simulated data in order to determine if it could accurately recover the true ranks using n = 100 neurons and T = 15 time points. For each run of our simulations we first selected a random dimensionality between 1-6 for each of P = 3 task variables (two graded variables with values drawn from {-2, -1, 0, 1, 2} and one binary task variable with values {-1, 1}). Using these dimensionalities, the elements of W_p and S_p were drawn independently from a N(0, 1) distribution. |
| Dataset Splits | No | The paper describes using 'simulated data' for evaluation, but does not explicitly detail train/validation/test splits or cross-validation for its own method. It mentions that dPCA uses 'cross-validated regularization parameters', but no comparable validation procedure is described for the proposed method. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper describes algorithms and methods but does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | We applied our greedy algorithm on simulated data in order to determine if it could accurately recover the true ranks using n = 100 neurons and T = 15 time points. For each run of our simulations we first selected a random dimensionality between 1-6 for each of P = 3 task variables (two graded variables with values drawn from {-2, -1, 0, 1, 2} and one binary task variable with values {-1, 1}). Using these dimensionalities, the elements of W_p and S_p were drawn independently from a N(0, 1) distribution. To give us heterogeneous noise variances, the noise variance for each neuron was drawn from an exponential distribution with mean parameter σ² = 50. The resulting average SNR for any one task variable was -0.26 (±0.75, log10 units). We then simulated observations according to our model with varying numbers of trials (N ∈ {50, 200, 500, 1000, 1500, 2000}). In order to simulate incomplete observations, we set the probability of observing any given neuron on any given trial to π_obs = 0.4. |
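
The Experiment Setup and Open Datasets rows above describe the simulated data generation in enough detail to sketch it. The following is a minimal sketch, not the authors' code: it assumes the standard low-rank model form in which each trial's n × T response is Σ_p x_p W_p S_pᵀ plus independent Gaussian noise, and the variable names (`n`, `T`, `P`, `N`, `p_obs`, `noise_mean`) are illustrative choices matching the quoted description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions quoted in the setup description.
n, T, P = 100, 15, 3       # neurons, time points, task variables
N = 500                    # trials (the paper sweeps N over {50, ..., 2000})
p_obs = 0.4                # probability a neuron is observed on a given trial
noise_mean = 50.0          # mean of the exponential prior on per-neuron noise variance

# Random per-variable ranks between 1 and 6.
ranks_true = rng.integers(1, 7, size=P)

# Low-rank factors W_p (n x r_p) and S_p (T x r_p) with i.i.d. N(0, 1) entries.
W = [rng.standard_normal((n, r)) for r in ranks_true]
S = [rng.standard_normal((T, r)) for r in ranks_true]

# Task-variable values per trial: two graded variables on {-2, ..., 2}, one binary.
X = np.column_stack([
    rng.choice([-2, -1, 0, 1, 2], size=N),
    rng.choice([-2, -1, 0, 1, 2], size=N),
    rng.choice([-1, 1], size=N),
]).astype(float)

# Heterogeneous per-neuron noise variances.
noise_var = rng.exponential(noise_mean, size=n)

# Each trial's n x T response: task-variable-weighted sum of the low-rank
# components W_p S_p^T plus Gaussian noise (assumed model form).
Y = np.empty((N, n, T))
for k in range(N):
    signal = sum(X[k, p] * (W[p] @ S[p].T) for p in range(P))
    Y[k] = signal + rng.standard_normal((n, T)) * np.sqrt(noise_var)[:, None]

# Incomplete observations: mask out neurons not observed on each trial.
observed = rng.random((N, n)) < p_obs
Y_obs = np.where(observed[:, :, None], Y, np.nan)
```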
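
The Pseudocode row above points to "Algorithm 1 Estimation of dimensionality", a greedy procedure for choosing the per-variable ranks. The paper's algorithm is likelihood-based; the sketch below is only a generic stand-in that greedily increments the rank of whichever task variable most reduces held-out reconstruction error, with the low-rank fit approximated by ordinary least squares followed by truncated SVD. The function names and the MSE scoring rule are assumptions, not the authors' implementation; the usage line reuses `Y`, `X`, and `N` from the fully observed simulation sketch above.

```python
import numpy as np

def fit_low_rank(Y, X, ranks):
    """Per-variable coefficient maps B_p (n x T), truncated to the given ranks.

    Y: (N, n, T) fully observed responses, X: (N, P) task variables.
    """
    N, n, T = Y.shape
    P = X.shape[1]
    # Ordinary least squares on the vectorized responses: Y_flat ~= X @ B_flat.
    B_flat, *_ = np.linalg.lstsq(X, Y.reshape(N, n * T), rcond=None)
    B = []
    for p in range(P):
        U, s, Vt = np.linalg.svd(B_flat[p].reshape(n, T), full_matrices=False)
        r = ranks[p]
        B.append((U[:, :r] * s[:r]) @ Vt[:r])  # rank-r truncation (all zeros if r = 0)
    return B

def val_mse(Y, X, B):
    """Mean squared error of the reconstruction sum_p x_p * B_p."""
    pred = np.einsum('kp,pnt->knt', X, np.stack(B))
    return np.mean((Y - pred) ** 2)

def greedy_rank_selection(Y_tr, X_tr, Y_va, X_va, max_rank=6):
    """Greedily add one rank at a time to whichever variable most reduces
    validation MSE; stop when no single increment improves the score."""
    P = X_tr.shape[1]
    ranks = [0] * P
    best = val_mse(Y_va, X_va, fit_low_rank(Y_tr, X_tr, ranks))
    while True:
        candidates = []
        for p in range(P):
            if ranks[p] < max_rank:
                cand = ranks.copy()
                cand[p] += 1
                score = val_mse(Y_va, X_va, fit_low_rank(Y_tr, X_tr, cand))
                candidates.append((score, cand))
        if not candidates:
            break
        score, cand = min(candidates, key=lambda c: c[0])
        if score >= best:
            break
        best, ranks = score, cand
    return ranks

# Example usage: split the simulated trials into training and validation halves.
half = N // 2
est_ranks = greedy_rank_selection(Y[:half], X[:half], Y[half:], X[half:])
```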