Variational Inference for Sparse Gaussian Process Modulated Hawkes Process
Authors: Rui Zhang, Christian Walder, Marian-Andrei Rizoiu (pp. 6803–6810)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We exploit synthetic data and two large social media datasets to evaluate our method. We show that our approach outperforms state-of-the-art non-parametric frequentist and Bayesian methods. We validate the efficiency of our accelerated variational inference schema and practical utility of our tighter ELBO for model selection. We observe that the tighter ELBO exceeds the common one in model selection. |
| Researcher Affiliation | Academia | Rui Zhang,1,2 Christian Walder,1,2 Marian-Andrei Rizoiu3 1The Australian National University, 2Data61 CSIRO, 3University of Technology Sydney Rui.Zhang@anu.edu.au, Christian.Walder@data61.csiro.au, Marian-Andrei.Rizoiu@uts.edu.au |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states that the code for some baselines is publicly available (Bacry et al. 2017) but does not provide any statement or link for the open-source code of their own proposed methodology. |
| Open Datasets | Yes | Our synthetic data are generated from three Hawkes processes over T = [0, π], whose triggering kernels are sin, cos and exp functions respectively, shown as below, and whose background intensities are the same μ = 10... Real World Data. We conclude our experiments with two large scale tweet datasets. ACTIVE (Rizoiu et al. 2018) is a tweet dataset... SEISMIC (Zhao et al. 2015) is a large scale tweet dataset... |
| Dataset Splits | Yes | To evaluate VBHP on synthetic data, 20 sequences are drawn from each model and 100 pairs of train and test sequences drawn from each sample to compute the HLL. ... Similar agreement is also observed between the TELBO and the HLL (Fig.3a, 3b). This demonstrates the practical utility of both the marginal likelihood itself and our approximation of it. |
| Hardware Specification | Yes | The CPU we use is Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz |
| Software Dependencies | No | The paper states only that 'the language is Python 3.6.5' without listing version numbers for any libraries, frameworks, or other key software components used in the experiments, which is insufficient for reproducibility under the criteria. |
| Experiment Setup | Yes | Table 2 shows evaluations for baselines and VBHP (using 10 inducing points for trade-off between accuracy and time, so does Gibbs Hawkes)... We scale all original data to T = [0, π]... and employ 10 inducing points to balance time and accuracy. The model selection is performed by maximizing the approximate marginal likelihood. |
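The synthetic-data row describes drawing sequences from Hawkes processes with background intensity μ = 10 on T = [0, π] and sin, cos, and exp triggering kernels. The paper's exact kernel scalings and sampler are not quoted here, so the following is a minimal sketch of such data generation using Ogata's thinning algorithm with an assumed unit-scale exponential kernel; the function name and the bound `k_max` are illustrative, not from the paper.

```python
import math
import random

def simulate_hawkes(mu, kernel, k_max, T, rng):
    """Simulate one Hawkes process realisation on [0, T] by Ogata's thinning.

    mu     : constant background intensity
    kernel : non-negative triggering kernel phi(tau), tau >= 0
    k_max  : an upper bound on kernel values, so mu + k_max * n
             dominates the true intensity after n past events
    """
    events = []
    t = 0.0
    while True:
        # Dominating (upper-bound) rate: each past event adds at most k_max.
        lam_bar = mu + k_max * len(events)
        t += rng.expovariate(lam_bar)  # candidate inter-arrival time
        if t >= T:
            break
        # True intensity at the candidate time.
        lam_t = mu + sum(kernel(t - ti) for ti in events)
        # Accept the candidate with probability lam_t / lam_bar.
        if rng.random() * lam_bar <= lam_t:
            events.append(t)
    return events

# Assumed example setup: mu = 10 on [0, pi] as in the paper, with a
# hypothetical unit-scale exp kernel phi(tau) = exp(-tau), bounded by 1.
rng = random.Random(0)
ev = simulate_hawkes(10.0, lambda tau: math.exp(-tau), 1.0, math.pi, rng)
```

The thinning bound stays valid for the sin and cos kernels as well, provided `k_max` is set to the kernel's maximum over its support.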