Deep Dynamic Poisson Factorization Model

Authors: Chengyue Gong, Win-bin Huang

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | Synthetic datasets and real-world datasets are applied to the proposed model and our results show good predicting and fitting performance with interpretable latent structure.
Researcher Affiliation | Academia | Chengyue Gong, Department of Information Management, Peking University (cygong@pku.edu.cn); Win-bin Huang, Department of Information Management, Peking University (huangwb@pku.edu.cn)
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | No explicit statement regarding the release of open-source code for the described methodology, nor any link to a code repository, was found.
Open Datasets | Yes | Integrated Crisis Early Warning System (ICEWS): an international relations event dataset extracted from news corpora, used in [2]. (...) NIPS corpus (NIPS): downloaded from Gal's page (http://ai.stanford.edu/gal/data.html), with T = 17, V = 14036, and 3280697 events in the matrix. (...) Ebola corpus (EBOLA): https://github.com/cmrivers/ebola/blob/master/country_timeseries.csv (...) International Disaster (ID): http://www.emdat.be/ (...) Annual Sheep Population (ASP): https://datamarket.com/data/list/?q=provider:tsdl. A sketch for loading the EBOLA CSV follows the table.
Dataset Splits | No | The paper states that 'Data in the last time step is exploited as the predicting target in a prediction task', indicating a held-out test step, but it does not provide train/validation splits or percentages for reproducing the data partitioning, nor does it explicitly mention a validation set (a split sketch follows the table).
Hardware Specification | No | No specific hardware details (such as GPU/CPU models, memory amounts, or other machine specifications) used for running experiments are mentioned in the paper.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library or solver names with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | All hyperparameters of PGDS set in [2] are used in this paper. 1000 Gibbs sampling iterations are performed for PGDS, 100 mean-field VI iterations for PFA, and 400 epochs for LSTM. The parameters of the proposed DDPFA model are set as follows: α(λ,φ,ψ) = 1, β(λ,φ,ψ) = 2, α(θ,h) = 1, β(θ,h) = 1. The number of iterations is set to 100. The stochastic gradient descent for the neural networks runs for 10 epochs in each iteration. The window size is 4. Hyperparameters of PFA are set the same as in our model. The number of factors is K = 3 and the number of layers is 2. Both fitting and prediction tasks are performed with each model. The LSTM has 4 hidden layers with 20 units each. In the second experiment, K = 3 for the ID and ASP datasets and K = 10 for the others, the LSTM hidden-layer size is 40, and the remaining parameters are the same as in the first experiment (a configuration sketch follows the table).
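
For the EBOLA corpus linked in the Open Datasets row, a minimal loading sketch is below. The raw-URL rewrite and the column names ("Date", "Day") are assumptions about the GitHub file's layout, not something the paper specifies; adjust them to the actual file.

```python
# Sketch: build a V x T count matrix from the EBOLA corpus linked above.
# Assumes the CSV has "Date" and "Day" columns plus per-country count columns.
import pandas as pd

PAGE = "https://github.com/cmrivers/ebola/blob/master/country_timeseries.csv"
# raw.githubusercontent.com serves the file itself rather than the GitHub page.
RAW = PAGE.replace("github.com", "raw.githubusercontent.com").replace("/blob", "")

df = pd.read_csv(RAW, parse_dates=["Date"]).sort_values("Date")
counts = (
    df.drop(columns=["Date", "Day"])  # keep only the per-country count columns
      .fillna(0.0)
      .to_numpy()
      .T                              # rows = V series, columns = T time steps
)
print(counts.shape)  # (V, T)
```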
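The only partitioning the paper describes is holding out the final time step as the prediction target. A sketch under that reading, using the NIPS corpus dimensions quoted above for illustration:

```python
# The split described in the Dataset Splits row: the last time step of each
# V x T count matrix is the prediction target, everything earlier is used
# for fitting. No validation set is mentioned in the paper, so none is made.
import numpy as np

def last_step_split(counts: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """counts: (V, T) event-count matrix; returns (train, target)."""
    return counts[:, :-1], counts[:, -1]

# Example with the NIPS corpus dimensions (V = 14036, T = 17) on dummy counts.
train, target = last_step_split(np.random.poisson(2.0, size=(14036, 17)))
print(train.shape, target.shape)  # (14036, 16) (14036,)
```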
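To make the quoted settings easier to scan, here is one way to collect them; the key names are illustrative (the paper gives symbols, not code identifiers), while the values are taken verbatim from the Experiment Setup row.

```python
# Hyperparameters quoted in the Experiment Setup row, gathered in one place.
base = dict(
    alpha_lambda_phi_psi=1.0,    # α(λ,φ,ψ)
    beta_lambda_phi_psi=2.0,     # β(λ,φ,ψ)
    alpha_theta_h=1.0,           # α(θ,h)
    beta_theta_h=1.0,            # β(θ,h)
    iterations=100,              # outer inference iterations
    sgd_epochs_per_iteration=10, # SGD epochs for the neural nets, per iteration
    window_size=4,
    num_layers=2,
)

# First experiment: K = 3 factors; LSTM baseline with 4 hidden layers of 20 units.
experiment_1 = dict(base, K=3, lstm_hidden_layers=4, lstm_units=20)

# Second experiment: K = 3 for ID and ASP, K = 10 otherwise; LSTM units = 40.
experiment_2 = {
    ds: dict(base, K=3 if ds in ("ID", "ASP") else 10, lstm_units=40)
    for ds in ("ICEWS", "NIPS", "EBOLA", "ID", "ASP")
}
```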