Deep Poisson gamma dynamical systems

Authors: Dandan Guo, Bo Chen, Hao Zhang, Mingyuan Zhou

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on both synthetic and a variety of real-world data demonstrate that the proposed model not only has excellent predictive performance, but also provides highly interpretable multilayer latent structure to represent hierarchical and temporal information propagation."
Researcher Affiliation | Academia | Dandan Guo, Bo Chen, Hao Zhang: National Laboratory of Radar Signal Processing, Collaborative Innovation Center of Information Sensing and Understanding, Xidian University, Xi'an, China (gdd_xidian@126.com, bchen@mail.xidian.edu.cn, zhanghao_xidian@163.com); Mingyuan Zhou: McCombs School of Business, The University of Texas at Austin, Austin, TX 78712, USA (mingyuan.zhou@mccombs.utexas.edu)
Pseudocode | Yes | "We describe the BUFD Gibbs sampling algorithm for DPGDS in Algorithm 1 and provide more details in the Appendix. ... We provide the details of the SGMCMC for DPGDS in Algorithm 2 in the Appendix."
Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | "Following the literature [1, 4], we consider sequences of different lengths ... and generate 50 synthetic bouncing ball videos for training, and 30 ones for testing. ... The State-of-the-Union (SOTU) dataset ... The Global Database of Events, Language, and Tone (GDELT) and Integrated Crisis Early Warning System (ICEWS) ... The NIPS corpus ... The DBLP corpus ..."
Dataset Splits | Yes | "for each corpus, the entire data of the last year is held out, and for the documents in the previous years we randomly partition the words of each document into 80% / 20% in each trial, and we conduct five random trials to report the sample mean and standard deviation."
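The held-out-year plus 80%/20% word-level split quoted above can be sketched in Python. The 80/20 ratio, last-year hold-out, and five random trials follow the quoted protocol; the function names and the `docs_by_year` data layout are assumptions for illustration, not the authors' code:

```python
import random

def split_words(doc_words, rng, train_frac=0.8):
    """Randomly partition one document's word tokens into train/test parts."""
    words = list(doc_words)
    rng.shuffle(words)
    cut = int(train_frac * len(words))
    return words[:cut], words[cut:]

def make_trial(docs_by_year, rng):
    """Hold out the last year entirely; split earlier documents 80/20 at the word level."""
    last_year = max(docs_by_year)
    heldout = docs_by_year[last_year]
    train, test = [], []
    for year, docs in docs_by_year.items():
        if year == last_year:
            continue  # entire last year is held out, never split
        for doc in docs:
            tr, te = split_words(doc, rng)
            train.append(tr)
            test.append(te)
    return train, test, heldout

# Five random trials, as in the quoted protocol (toy corpus for illustration).
corpus = {2000: [["war", "peace", "tax", "law", "vote"]],
          2001: [["trade", "jobs", "health", "energy", "budget"]],
          2002: [["final", "year", "held", "out"]]}
trials = [make_trial(corpus, random.Random(seed)) for seed in range(5)]
```

Reporting the sample mean and standard deviation over the five trials then proceeds per evaluation metric.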
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as CPU/GPU models or memory specifications.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) for their implementation.
Experiment Setup | Yes | "For DPGDS, we set τ0 = 1, γ0 = 100, η0 = 0.1 and ϵ0 = 0.1. We use [K(1), K(2), K(3)] = [200, 100, 50] for both DPGDS and DTSBN, and K = 200 for PGDS, GP-DPFA, GPDM, and TSBN. For PGDS, GP-DPFA, GPDM, and DPGDS, we run 2000 Gibbs sampling iterations as burn-in and collect 3000 samples for evaluation. We also use SGMCMC to infer DPGDS, with 5000 collection samples after 5000 burn-in steps, and use 10000 SGMCMC iterations for both TSBN and DTSBN to evaluate their performance."
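The burn-in/collection protocol quoted above is the standard MCMC evaluation pattern: discard early samples, then average over the rest. A minimal sketch, where `gibbs_step` is a placeholder random-walk update rather than the paper's actual BUFD Gibbs sweep for DPGDS:

```python
import random

def gibbs_step(state, rng):
    """Placeholder for one full Gibbs sweep over the model's latent variables."""
    return state + rng.gauss(0.0, 1.0)  # stand-in update, not the DPGDS sampler

def run_chain(init_state, n_burnin=2000, n_collect=3000, seed=0):
    """Run burn-in iterations, then collect samples for evaluation,
    mirroring the 2000 burn-in / 3000 collection setup quoted above."""
    rng = random.Random(seed)
    state = init_state
    for _ in range(n_burnin):      # burn-in: samples are discarded
        state = gibbs_step(state, rng)
    samples = []
    for _ in range(n_collect):     # collection: samples are kept
        state = gibbs_step(state, rng)
        samples.append(state)
    return samples

samples = run_chain(0.0)
estimate = sum(samples) / len(samples)  # posterior estimate from collected samples
```

The SGMCMC runs quoted above follow the same discard-then-collect shape, only with stochastic-gradient updates in place of exact Gibbs sweeps.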