Switching Poisson Gamma Dynamical Systems
Authors: Wenchao Chen, Bo Chen, Yicheng Liu, Qianru Zhao, Mingyuan Zhou
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both unsupervised and supervised tasks demonstrate that the proposed model not only has excellent fitting and prediction performance on complex dynamic sequences, but also separates different dynamical patterns within them. |
| Researcher Affiliation | Academia | (1) National Laboratory of Radar Signal Processing, Xidian University, Xi'an, China; (2) McCombs School of Business, The University of Texas at Austin, Austin, TX 78712, USA |
| Pseudocode | Yes | Algorithm 1 Hybrid stochastic-gradient MCMC and autoencoding variational inference for SPGDS |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Global Database of Events, Language, and Tone (GDELT): GDELT is an international relationship dataset extracted from news corpora. Integrated Crisis Early Warning System (ICEWS): ICEWS is another international relationship dataset extracted from news corpora. State-of-the-Union transcript (SOTU): The SOTU dataset contains the text of the annual SOTU speech transcripts from 1790 to 2014. DBLP conference abstract (DBLP): The DBLP corpus is a database of computer research papers. NIPS corpus (NIPS): The NIPS corpus contains the text of every NIPS conference paper from 1987 to 2003. ... sequential MNIST dataset and permuted sequential MNIST dataset. For sequential MNIST, the pixels of MNIST digits [LeCun et al., 1998] are presented sequentially to the network and classification is performed at the end. |
| Dataset Splits | No | "We employ the setup in [Zhe et al., 2015]: the entire data of the last year is held out, while the words of each document in the previous years are randomly partitioned into an 80%/20% split. The 80% portion is used to train the model, and prediction at the next year is tested on the remaining 20% held-out words." This describes a train/test split, but no separate validation split with specific percentages is provided; the paper mentions cross-validation for hyperparameter tuning without specifying the folds. (A sketch of this word-level 80%/20% split appears below the table.) |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the implementation or experiments. |
| Experiment Setup | Yes | We set the number of latent states as K = 2, and compare the proposed SPGDS with PGDS, HMM and LDS on Mean Square Error (MSE) between the ground truth and the estimated value, and on Prediction Mean Square Error (PMSE), the MSE between the ground truth and the prediction at the next time step. ... M is set as 50 here. ... We select Cg = 5, Ci = 5, Cs = 3, Cd = 2, Cn = 3 for the datasets from left to right in Tab. 3. ... The latent dimension of the models is 100. (The MSE/PMSE metrics are sketched in code after the table.) |
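The word-level 80%/20% partition quoted in the Dataset Splits row can be made concrete with a short sketch. The snippet below is a minimal illustration, not the authors' code: the helper name `split_document_words`, the bag-of-words count representation, and the use of NumPy are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_document_words(doc_word_counts, train_frac=0.8, rng=rng):
    """Randomly partition the word tokens of one document into an
    80%/20% train/held-out split, as described in the quoted setup.
    `doc_word_counts` is a 1-D integer array of per-word-type counts.
    (Hypothetical helper; the paper does not publish code.)"""
    # Expand counts into a flat list of word tokens, then shuffle.
    tokens = np.repeat(np.arange(len(doc_word_counts)), doc_word_counts)
    rng.shuffle(tokens)
    n_train = int(round(train_frac * len(tokens)))
    vocab = len(doc_word_counts)
    # Re-aggregate each portion back into count vectors.
    train_counts = np.bincount(tokens[:n_train], minlength=vocab)
    test_counts = np.bincount(tokens[n_train:], minlength=vocab)
    return train_counts, test_counts

# Toy document over a 5-word vocabulary.
doc = np.array([4, 0, 3, 1, 2])
train, test = split_document_words(doc)
assert (train + test == doc).all()  # the split preserves total counts
```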
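Similarly, the MSE and PMSE criteria in the Experiment Setup row reduce to simple array operations. The sketch below assumes the ground truth and model outputs are pre-aligned NumPy arrays; the helper names `mse` and `pmse` are hypothetical, not from the paper.

```python
import numpy as np

def mse(ground_truth, estimate):
    """Mean squared error between ground truth and the model's
    estimate at the same time steps."""
    return np.mean((ground_truth - estimate) ** 2)

def pmse(ground_truth, prediction):
    """Prediction MSE: ground truth at t+1 versus the one-step-ahead
    prediction made at t (arrays already aligned by the caller)."""
    return mse(ground_truth, prediction)

# Toy usage: a series of length T and one-step-ahead predictions of length T-1.
x = np.array([1.0, 2.0, 4.0, 7.0])
x_hat = np.array([1.1, 1.9, 4.2, 6.8])   # in-sample estimate of x
x_pred = np.array([2.2, 3.8, 6.5])       # predicts x[1:], made at each t

print("MSE: ", mse(x, x_hat))
print("PMSE:", pmse(x[1:], x_pred))
```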