Learning Conditional Generative Models for Temporal Point Processes

Authors: Shuai Xiao, Hongteng Xu, Junchi Yan, Mehrdad Farajtabar, Xiaokang Yang, Le Song, Hongyuan Zha

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our model shows promising performance on various synthetic data and real-world event sequences in different domains." "Our experiment involves various public real-world datasets, and synthetic datasets simulated by popular point process models."
Researcher Affiliation | Collaboration | Shanghai Jiao Tong University; College of Computing, Georgia Institute of Technology; Duke University; IBM Research China
Pseudocode | Yes | Algorithm 1: Conditional Wasserstein estimator (CWE) with MLE loss for event sequence prediction (a hedged training-loop sketch appears after the table). The default values are α = 1e-4, β1 = 0.5, β2 = 0.9, m = 256, n_critic = 5.
Require: the trade-off coefficient σ; the regularization coefficient λ for the direct Lipschitz constraint; the batch size m; the number of critic iterations per generator iteration, n_critic; Adam hyper-parameters α, β1, β2.
Require: w0, initial parameters of the CNN discriminator f_w; θ0, initial parameters of the seq2seq LSTM generator g_θ.
1: while θ has not converged do
2:   for t = 0, ..., n_critic do
3:     Sample {ζ^(i), ρ^(i)}_{i=1..m} ∼ P_r from real data.
4:     Generate {ζ, g_θ(ζ)}.
5:     x̂ ← z·{ζ, ρ} + (1 − z)·{ζ, g_θ(ζ)}, where z ∼ Uniform(0, 1).
6:     L_w ← Σ_{l=1..L} [f_w({ζ_l, ρ_l}) − f_w({ζ_l, g_θ(ζ_l)})] − λ·| ‖∇f_w(x̂)‖ − 1 |
7:     w ← Adam(∇_w L_w, w, α, β1, β2)
8:   end for
9:   Sample {ζ^(i), ρ^(i)}_{i=1..m} ∼ P_r from real data.
10:   L_θ ← (1/m) Σ_{i=1..m} [−f_w({ζ^(i), g_θ(ζ^(i))}) − σ·log P_θ(ρ^(i) | ζ^(i))]
11:   θ ← Adam(∇_θ L_θ, θ, α, β1, β2)
12: end while
Open Source Code | No | The paper provides links to datasets (e.g., GitHub links for LinkedIn, IPTV, and NYSE data) but does not provide concrete access to the source code for the methodology described in the paper.
Open Datasets | Yes | The dataset was downloaded from https://mimic.physionet.org. The data can be found at https://github.com/HongtengXu/Hawkes-Process-Toolkit/blob/master/Data/Linkedin_Data.mat. The data can be found at https://github.com/HongtengXu/Hawkes-Process-Toolkit/blob/master/Data/IPTVData.mat. The data can be found at https://github.com/dunan/NeuralPointProcess/tree/master/data/real/book_order. (A minimal loading example appears after the table.)
Dataset Splits | Yes | "The data for each type is divided into train, validation and test parts according to 0.7, 0.1, 0.2 ratio." (A minimal split sketch appears after the table.)
Hardware Specification | Yes | "The implementation is based on TensorFlow and all experiments are executed on 12 Nvidia Tesla K80 GPUs."
Software Dependencies | No | The paper names TensorFlow as the implementation basis but provides no version numbers for TensorFlow or any other software dependency.
Experiment Setup | Yes | "The default values α = 1e-4, β1 = 0.5, β2 = 0.9, m = 256, n_critic = 5." Require: the trade-off coefficient σ; the regularization coefficient λ for the direct Lipschitz constraint; the batch size m; the number of critic iterations per generator iteration, n_critic; Adam hyper-parameters α, β1, β2. (These defaults are used in the training-loop sketch below.)
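
The following is a minimal example of loading one of the .mat datasets linked above, assuming SciPy is installed and the file has been downloaded locally; the variable names inside the file are not documented here, so the snippet only lists them.

    # Load one of the linked .mat datasets (local download assumed).
    from scipy.io import loadmat

    data = loadmat("Linkedin_Data.mat")
    # Keys beginning with "__" are MATLAB metadata; the rest hold the data arrays.
    print([key for key in data if not key.startswith("__")])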
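
A minimal sketch of the reported 0.7/0.1/0.2 split, assuming the event sequences of one type sit in a plain Python list; the paper does not state whether the split is random or chronological, so this sketch simply slices in order.

    # Split the sequences of one event type into train/validation/test (0.7/0.1/0.2).
    def split_sequences(seqs):
        n_train = int(0.7 * len(seqs))
        n_val = int(0.1 * len(seqs))
        return (seqs[:n_train],
                seqs[n_train:n_train + n_val],
                seqs[n_train + n_val:])

    train, val, test = split_sequences(list(range(100)))  # 70 / 10 / 20 items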
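
Finally, a hedged, runnable sketch of the training loop in Algorithm 1, assuming TensorFlow 2.x. The paper's seq2seq LSTM generator and CNN critic are replaced by small dense stand-ins, and the point-process likelihood P_θ(ρ | ζ) is approximated by a squared-error term; SEQ_LEN, the network sizes, and the σ and λ values are illustrative, not the authors' settings.

    # Minimal sketch of the CWE loop (Algorithm 1); all model choices are stand-ins.
    import tensorflow as tf

    SEQ_LEN = 20        # length of history (zeta) and future (rho) windows
    M = 256             # batch size m from the paper
    SIGMA = 1.0         # trade-off coefficient sigma (illustrative value)
    LAM = 0.1           # Lipschitz regularization coefficient lambda (illustrative)
    N_CRITIC = 5        # critic iterations per generator iteration

    def make_net(out_dim):
        """Stand-in network (the paper uses a seq2seq LSTM / CNN instead)."""
        return tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(out_dim),
        ])

    generator = make_net(SEQ_LEN)   # g_theta: zeta -> predicted rho
    critic = make_net(1)            # f_w: concat(zeta, rho) -> scalar score

    # Adam hyper-parameters reported in the paper: alpha=1e-4, beta1=0.5, beta2=0.9.
    g_opt = tf.keras.optimizers.Adam(1e-4, beta_1=0.5, beta_2=0.9)
    w_opt = tf.keras.optimizers.Adam(1e-4, beta_1=0.5, beta_2=0.9)

    def critic_step(zeta, rho):
        """Lines 3-7: update the critic f_w on a real batch (zeta, rho)."""
        fake = generator(zeta)
        # Line 5: interpolate real and generated futures with z ~ Uniform(0, 1).
        z = tf.random.uniform([tf.shape(zeta)[0], 1])
        x_hat = tf.concat([zeta, z * rho + (1.0 - z) * fake], axis=1)
        with tf.GradientTape() as tape:
            real_score = critic(tf.concat([zeta, rho], axis=1))
            fake_score = critic(tf.concat([zeta, fake], axis=1))
            with tf.GradientTape() as inner:
                inner.watch(x_hat)
                score_hat = critic(x_hat)
            grad = inner.gradient(score_hat, x_hat)
            # Line 6: penalty | ||grad f_w(x_hat)|| - 1 | for the Lipschitz constraint.
            penalty = tf.reduce_mean(tf.abs(tf.norm(grad, axis=1) - 1.0))
            # The critic maximizes the Wasserstein gap, so minimize its negation.
            loss_w = -tf.reduce_mean(real_score - fake_score) + LAM * penalty
        grads = tape.gradient(loss_w, critic.trainable_variables)
        w_opt.apply_gradients(zip(grads, critic.trainable_variables))

    def generator_step(zeta, rho):
        """Lines 9-11: update the generator g_theta with the combined loss."""
        with tf.GradientTape() as tape:
            fake = generator(zeta)
            fake_score = critic(tf.concat([zeta, fake], axis=1))
            # Stand-in for -log P_theta(rho | zeta); the paper uses the
            # point-process likelihood of its LSTM generator here.
            nll = tf.reduce_mean(tf.square(rho - fake))
            # Line 10: adversarial term plus the sigma-weighted MLE term.
            loss_g = -tf.reduce_mean(fake_score) + SIGMA * nll
        grads = tape.gradient(loss_g, generator.trainable_variables)
        g_opt.apply_gradients(zip(grads, generator.trainable_variables))

    # One outer iteration of the while-loop, on synthetic stand-in data.
    zeta = tf.random.uniform([M, SEQ_LEN])
    rho = tf.random.uniform([M, SEQ_LEN])
    for _ in range(N_CRITIC):
        critic_step(zeta, rho)
    generator_step(zeta, rho)

The nested GradientTape is the usual TensorFlow 2 pattern for penalties of this kind: the inner tape yields ∇f_w(x̂) for the Lipschitz term, while the outer tape differentiates the penalized loss with respect to the critic weights.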