Deep Poisson Factor Modeling

Authors: Ricardo Henao, Zhe Gan, James Lu, Lawrence Carin

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present experiments on three corpora: 20 Newsgroups (20 News), Reuters corpus volume I (RCV1) and Wikipedia (Wiki)."
Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708 {r.henao,zhe.gan,james.lu,lcarin}@duke.edu
Pseudocode | No | The paper describes the inference procedures verbally and with mathematical equations but does not include structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | "The code used, implemented in Matlab, will be made publicly available."
Open Datasets | Yes | "We present experiments on three corpora: 20 Newsgroups (20 News), Reuters corpus volume I (RCV1) and Wikipedia (Wiki)."
Dataset Splits | No | The paper specifies training and test sets for 20 News (11,315 training, 7,531 test documents) and describes how held-out data is used to compute perplexity, but it does not state a distinct validation split for hyperparameter tuning.
Hardware Specification | No | The paper discusses computational complexity and runtimes but does not give hardware details such as the GPU or CPU models, or cloud-computing specifications, used to run the experiments.
Software Dependencies | No | The paper states that the code is "implemented in Matlab" but does not provide version numbers for Matlab or any other ancillary software used in the experiments.
Experiment Setup | Yes | "For our model, we run 3,000 samples (first 2,000 as burn-in) for MCMC and 4,000 iterations with 200-document mini-batches for SVI." "In the experiments, we set κ = 0.7 and τ = 128." "In the experiments, we run 100 Gibbs sampling cycles per layer."
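
The reported κ and τ are the parameters of the standard Robbins-Monro step-size schedule ρ_t = (τ + t)^(−κ) used in stochastic variational inference. Below is a minimal Python sketch of that schedule under the reported settings (4,000 iterations, 200-document mini-batches); the generic natural-gradient update, the global-parameter shape, and the document count are illustrative assumptions, not the paper's exact update equations.

```python
import numpy as np

# Sketch of the SVI step-size schedule implied by the reported settings
# kappa = 0.7, tau = 128: rho_t = (tau + t)^(-kappa), the standard
# Robbins-Monro schedule. The update below is a generic natural-gradient
# convex combination, not the paper's specific update equations.

kappa, tau = 0.7, 128      # settings reported in the paper
num_iters = 4000           # SVI iterations reported in the paper
batch_size = 200           # documents per mini-batch reported in the paper
num_docs = 11315           # e.g. the 20 News training-set size

rng = np.random.default_rng(0)
global_param = np.ones(50)  # hypothetical global variational parameter

for t in range(1, num_iters + 1):
    rho_t = (tau + t) ** (-kappa)  # step size, decaying over iterations

    # Sample a mini-batch of document indices; in the actual model these
    # documents would drive the local (per-document) variational updates.
    batch = rng.choice(num_docs, size=batch_size, replace=False)

    # Hypothetical noisy mini-batch estimate of the global natural parameter.
    intermediate = np.ones_like(global_param) \
        + 0.01 * rng.standard_normal(global_param.shape)

    # Standard SVI step: convex combination of old and mini-batch estimates.
    global_param = (1.0 - rho_t) * global_param + rho_t * intermediate
```

With κ = 0.7 and τ = 128 the step size starts near 128^(−0.7) ≈ 0.034 and decays slowly, which keeps early mini-batch estimates from dominating the global parameters.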