Wasserstein Learning of Deep Generative Point Process Models
Authors: Shuai Xiao, Mehrdad Farajtabar, Xiaojing Ye, Junchi Yan, Le Song, Hongyuan Zha
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones. The current work aims at exploring the feasibility of modeling point processes without prior knowledge of their underlying generating mechanism. |
| Researcher Affiliation | Collaboration | Shanghai Jiao Tong University; College of Computing, Georgia Institute of Technology; School of Mathematics, Georgia State University; IBM Research China; Ant Financial |
| Pseudocode | Yes | Algorithm 1 WGANTPP for Temporal Point Process. |
| Open Source Code | Yes | The Adam optimization method with learning rate 1e-4, β1 = 0.5, β2 = 0.9, is applied. The code is available at https://github.com/xiaoshuai09/Wasserstein-Learning-For-Point-Process |
| Open Datasets | Yes | Real datasets. We collect sequences separately from four publicly available datasets, namely, healthcare MIMIC-III, public media MemeTracker, NYSE stock exchanges, and publication citations. |
| Dataset Splits | No | The paper describes the total size of the datasets (e.g., '20,000 sequences', '2246 patients') but does not specify how these datasets are partitioned into training, validation, and test sets with explicit percentages, counts, or a detailed splitting methodology. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper mentions 'Adam optimization method' but does not specify version numbers for any software, libraries, or frameworks used in the implementation or experimentation. |
| Experiment Setup | Yes | The default values are α = 1e-4, β1 = 0.5, β2 = 0.9, m = 256, n_critic = 5. The Adam optimization method with learning rate 1e-4, β1 = 0.5, β2 = 0.9, is applied. (A training-loop sketch using these settings follows the table.) |
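The reported optimizer settings map onto a standard WGAN update schedule. Below is a minimal sketch, assuming PyTorch and hypothetical placeholder MLPs for the generator and critic; the paper's actual WGANTPP networks are recurrent models over event sequences, and the linked repository is the authoritative implementation. The sketch only shows how α = 1e-4, β1 = 0.5, β2 = 0.9, m = 256, and n_critic = 5 fit together.

```python
import torch

# Hyperparameters reported in the table: Adam with lr=1e-4, betas=(0.5, 0.9),
# batch size m=256, and n_critic=5 critic steps per generator step.
LR, BETAS, BATCH, N_CRITIC = 1e-4, (0.5, 0.9), 256, 5

# Placeholder networks (assumption): the real WGANTPP generator/critic operate
# on event sequences; these simple MLPs stand in purely to illustrate the loop.
gen = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8))
critic = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))

opt_g = torch.optim.Adam(gen.parameters(), lr=LR, betas=BETAS)
opt_c = torch.optim.Adam(critic.parameters(), lr=LR, betas=BETAS)

def real_batch():
    # Stand-in for sampling a batch of real event sequences from training data.
    return torch.randn(BATCH, 8)

for step in range(1000):
    # n_critic critic updates per generator update, as in WGAN training.
    for _ in range(N_CRITIC):
        z = torch.randn(BATCH, 16)
        # Critic minimizes E[f(fake)] - E[f(real)].
        loss_c = critic(gen(z).detach()).mean() - critic(real_batch()).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        # (The critic's Lipschitz constraint, e.g. weight clipping, is omitted here.)
    z = torch.randn(BATCH, 16)
    # Generator minimizes -E[f(fake)].
    loss_g = -critic(gen(z)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

This loop mirrors only the optimizer and update schedule reported above; handling variable-length point-process sequences and the Wasserstein distance on them requires the machinery described in the paper itself.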