Multivariate Hawkes Processes for Large-Scale Inference
Authors: Rémi Lemonnier, Kevin Scaman, Argyris Kalogeratos
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Table 1: Experiments on MemeTracker subsets. AUC (%) and Accuracy (%) are reported for predicting the next event to happen, using the SLRHP, MEMIP, and NAIVE approaches. |
| Researcher Affiliation | Collaboration | Rémi Lemonnier (1,2), Kevin Scaman (1,3), Argyris Kalogeratos (1). (1) CMLA, ENS Paris-Saclay, CNRS, Université Paris-Saclay, France; (2) Numberly, 1000Mercis group, Paris, France; (3) Microsoft Research-Inria Joint Center, Palaiseau, France |
| Pseudocode | Yes | Algorithm 1 SLRHP Inference: high-level description. Input: H, K, γ, δ, P, α. Output: P, α. 1: Compute D and B; 2: for i = 1 to num_iters do; 3: α = arg max_α L(P, H; α); 4: s.t. μ_i^K ≥ 0 and g_{ji}^K ≥ 0, i, j = 1, ..., r; 5: P = arg max_P L(P, H; α); 6: end for; 7: return P, α. (A runnable sketch of this alternating loop is given after the table.) |
| Open Source Code | No | Lemonnier, R.; Scaman, K.; and Kalogeratos, A. 2017. Multivariate Hawkes processes for large-scale inference: supplementary material. Available at: http://kalogeratos.com/psite/files/My Papers/SLRHPappendix.pdf. The paper links to supplementary material in PDF form, but does not state that source code for the method is available or point to a code repository. |
| Open Datasets | Yes | Our final set of experiments are conducted on the MemeTracker (Leskovec and Krevl 2014) dataset. MemeTracker is a benchmark corpus of 9.6 × 10^6 blog posts published between August 2008 and April 2009. We use posts from the period August 2008 to December 2008 as training set... Leskovec, J., and Krevl, A. 2014. SNAP Datasets: Stanford large network dataset collection. Available at: http://snap.stanford.edu/data. |
| Dataset Splits | No | We use posts from the period August 2008 to December 2008 as training set, and evaluate our models on the four remaining months. The paper specifies training and evaluation periods, but does not explicitly mention a validation dataset split. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or cloud instance types used for the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers, such as Python or PyTorch versions, or specific solver versions. |
| Experiment Setup | Yes | For SLRHP, K = 6 and r = 2, except for MT3, for which r = 3. For h = 1, ..., H, let H_h = (t_m^h, u_m^h)_{m ≤ n_h} be the observed i.i.d. realizations sampled from the Hawkes process, and H = (H_h)_{h ≤ H} the recorded history of events of all realizations. For each realization h, we denote as [T_-^h, T_+^h] the observation period, and u_m^h and t_m^h are respectively the event type and time of occurrence of the m-th event. The log-likelihood of the observations is: (...). Each natural occurrence rate and kernel function is approximated by a sum of K exponential triggering functions with γ, δ > 0 fixed hyperparameter values, and we choose ϵ > 0 and solve: α̂ = arg max_α Σ_{h,m} ln(c_{hm}^⊤ α) + ϵ b(α) - b^⊤ α. (A sketch of the sum-of-exponentials kernel approximation is given after the table.) |
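
Below is a minimal, runnable Python sketch of the block-coordinate loop in Algorithm 1 (alternating maximization over α and P). A toy concave surrogate stands in for the Hawkes log-likelihood L(P, H; α); the problem sizes, step sizes, and helper names (`surrogate_loglik`, `numerical_grad`, `maximize_alpha`, `maximize_P`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of the alternating maximization in Algorithm 1 (SLRHP inference).
# A toy quadratic surrogate replaces the Hawkes log-likelihood so the loop runs
# end to end; all sizes and helper names below are assumptions for illustration.

rng = np.random.default_rng(0)
d, r = 8, 2                       # event types and low-rank dimension (toy values)
D = rng.random((d, d))            # placeholder for the precomputed statistic D
B = rng.random(r * r)             # placeholder for the precomputed statistic B


def surrogate_loglik(P, alpha):
    """Toy stand-in for L(P, H; alpha): reward P @ mat(alpha) @ P^T matching D."""
    G = P @ alpha.reshape(r, r) @ P.T
    return -np.sum((D - G) ** 2) - 1e-3 * float(B @ alpha)


def numerical_grad(f, x, eps=1e-5):
    """Central finite-difference gradient of a scalar function of a flat vector."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g


def maximize_alpha(P, alpha, lr=1e-3, steps=100):
    """Steps 3-4: projected gradient ascent on alpha, keeping alpha >= 0."""
    for _ in range(steps):
        grad = numerical_grad(lambda a: surrogate_loglik(P, a), alpha)
        alpha = np.maximum(alpha + lr * grad, 0.0)   # projection onto alpha >= 0
    return alpha


def maximize_P(P, alpha, lr=1e-3, steps=100):
    """Step 5: gradient ascent on the low-rank projection matrix P."""
    flat = P.ravel().copy()
    for _ in range(steps):
        grad = numerical_grad(lambda p: surrogate_loglik(p.reshape(d, r), alpha), flat)
        flat = flat + lr * grad
    return flat.reshape(d, r)


# Algorithm 1, high level: alternate the two block maximizations.
P = rng.random((d, r))
alpha = rng.random(r * r)
for it in range(5):               # "num_iters" in the pseudocode
    alpha = maximize_alpha(P, alpha)
    P = maximize_P(P, alpha)
    print(f"iter {it}: surrogate log-lik = {surrogate_loglik(P, alpha):.4f}")
```

The non-negativity projection in `maximize_alpha` plays the role of the constraints μ_i^K ≥ 0 and g_{ji}^K ≥ 0 in the quoted pseudocode.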
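The experiment-setup row quotes the approximation of each occurrence rate and kernel by a sum of K exponential triggering functions with fixed hyperparameters γ, δ > 0. The sketch below illustrates that idea by fitting non-negative weights of K exponentials to an example power-law kernel, using a geometric grid of decay rates built from γ and δ; the target kernel, the grid construction, and the non-negative least-squares fit are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import nnls

# Hedged illustration: approximate a triggering kernel by a sum of K exponentials.
# K = 6 matches the SLRHP runs quoted above; gamma, delta, the geometric rate grid,
# and the power-law target are illustrative assumptions.

K, gamma, delta = 6, 2.0, 0.5              # toy hyperparameter values
rates = delta * gamma ** np.arange(K)       # assumed geometric grid of decay rates

t = np.linspace(0.01, 10.0, 500)            # time grid on which the kernel is matched
target = (1.0 + t) ** -2.0                  # example kernel to approximate (power law)

# Design matrix: column k is exp(-rates[k] * t); fit non-negative weights a_k so that
# sum_k a_k * exp(-rates[k] * t) approximates target(t).
A = np.exp(-np.outer(t, rates))
weights, residual = nnls(A, target)

approx = A @ weights
print("decay rates:", np.round(rates, 3))
print("non-negative weights:", np.round(weights, 4))
print("max absolute approximation error:", np.abs(approx - target).max())
```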