Reduced-Rank Linear Dynamical Systems
Authors: Qi She, Yuan Gao, Kai Xu, Rosa Chan
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results on both simulated and experimental data demonstrate our model can robustly learn latent space from short-length, noisy, count-valued data and significantly improve the prediction performance over the state-of-the-art methods. |
| Researcher Affiliation | Collaboration | (1) Princeton Neuroscience Institute, Princeton University; (2) Tencent AI Lab; (3) Department of Electronic Engineering, City University of Hong Kong; (4) Department of Computer Science, Princeton University; (5) School of Computer Science, National University of Defense Technology |
| Pseudocode | Yes | Algorithm 1 Framework of inference and learning (VBEM) |
| Open Source Code | Yes | We implement RRLDS in Matlab (2017a), and our code is available at https://github.com/sheqi/RRLDS |
| Open Datasets | Yes | We also evaluated our method on two experimental hippocampus datasets (Mizuseki et al. 2009). |
| Dataset Splits | Yes | β1,β2 are selected (in all experiments) by the internal cross validation while optimizing model's predictive performance. ... The length is 500 for training data and 100 for testing data. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | Yes | We implement RRLDS in Matlab (2017a) |
| Experiment Setup | Yes | β1,β2 are selected (in all experiments) by the internal cross validation while optimizing model's predictive performance. ... We select the step size to assure fast convergence rate based on Theorem 1 and proof is in the supplementary material. ... In practice, we initialize our parameters using Laplace-EM algorithm (Buesing et al. 2014), which empirically gives runtime advantages, and produces a sensible optimum. |
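The "Experiment Setup" row reports that β1, β2 are tuned by internal cross validation on a length-500 training sequence. A minimal sketch of that selection loop is below, assuming a grid search over candidate values with the last segment of the training sequence held out for validation. The RRLDS model itself is not reimplemented here; a toy exponential smoother stands in for the fitted model, and the function names, grid values, and 100-point validation length are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of internal cross-validation for (beta1, beta2).
# `fit_predict` is a hypothetical stand-in model, NOT the authors' RRLDS.
import numpy as np

def fit_predict(train, horizon, beta1, beta2):
    """Toy stand-in: exponential smoothing whose smoothness grows with
    beta1, followed by a constant forecast damped by beta2."""
    level = train[0]
    alpha = 1.0 / (1.0 + beta1)  # larger beta1 -> heavier smoothing
    for y in train[1:]:
        level = alpha * y + (1 - alpha) * level
    return np.full(horizon, level / (1.0 + beta2))

def select_betas(train, grid, val_len=100):
    """Pick (beta1, beta2) minimizing MSE on the last `val_len` points
    of the training sequence (the internal validation split)."""
    fit, val = train[:-val_len], train[-val_len:]
    best, best_mse = None, np.inf
    for b1 in grid:
        for b2 in grid:
            pred = fit_predict(fit, len(val), b1, b2)
            mse = float(np.mean((pred - val) ** 2))
            if mse < best_mse:
                best, best_mse = (b1, b2), mse
    return best, best_mse

rng = np.random.default_rng(0)
train = 5.0 + rng.normal(0.0, 0.5, size=500)  # length-500 training series
betas, mse = select_betas(train, grid=[0.0, 0.1, 1.0, 10.0])
print(betas, round(mse, 3))
```

In the paper the inner criterion is the model's predictive performance on count-valued data; any held-out scoring rule (e.g. predictive log-likelihood instead of MSE) slots into the same loop.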