Online Learning for Multivariate Hawkes Processes

Authors: Yingxiang Yang, Jalal Etesami, Niao He, Negar Kiyavash

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical results show that our algorithm offers a competing performance to that of the nonparametric batch learning algorithm, with a run time comparable to parametric online learning algorithms. We evaluate the performance of NPOLE-MHP on both synthetic and real data, from multiple aspects: (i) visual assessment of the goodness-of-fit comparing to the ground truth; (ii) the average $L^1$ error, defined as the average of $\sum_{i=1}^{p}\sum_{j=1}^{p}\|f_{i,j}-\hat{f}_{i,j}\|_{L^1[0,z]}$ over multiple trials; (iii) scalability over both dimension p and time horizon T. For benchmarks, we compare NPOLE-MHP's performance to that of online parametric algorithms (DMD, OGD of [15]) and nonparametric batch learning algorithms (MLE-SGLP, MLE of [27])."
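The average $L^1$ metric in (ii) is straightforward to compute once the kernels are discretized. A minimal sketch, assuming each kernel is stored as a step function on a grid of spacing δ over [0, z); the function name and array layout are illustrative, not taken from the paper:

```python
import numpy as np

def avg_l1_error(f_true, f_est, delta):
    """(1/p^2) * sum_{i,j} ||f_ij - f_hat_ij||_{L1[0,z]}, with the kernels
    given as shape-(p, p, m) arrays discretized on a grid of spacing
    `delta` (so z = m * delta); integrals approximated by Riemann sums."""
    p = f_true.shape[0]
    per_pair = np.abs(f_true - f_est).sum(axis=2) * delta  # (p, p) matrix of L1 errors
    return per_pair.sum() / p**2

# Toy check with p = 2 exponential kernels on [0, 3), delta = 0.05
# (matching the discretization quoted in the experiment setup).
delta, z = 0.05, 3.0
t = np.arange(0.0, z, delta)
f_true = np.stack([[np.exp(-t), 0.5 * np.exp(-2.0 * t)],
                   [0.3 * np.exp(-t), np.exp(-3.0 * t)]])
err_same = avg_l1_error(f_true, f_true, delta)            # identical estimates -> 0
err_zero = avg_l1_error(f_true, np.zeros_like(f_true), delta)
```

The Riemann-sum approximation is exact up to O(δ) per kernel, which is consistent with the discretization error already incurred by representing the kernels as step functions.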
Researcher Affiliation | Academia | "University of Illinois at Urbana-Champaign, Urbana, IL 61801; {yyang172,etesami2,niaohe,kiyavash}@illinois.edu"
Pseudocode | Yes | "Algorithm 1: Nonparametric Online Estimation for MHP (NPOLE-MHP)"
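The report does not reproduce Algorithm 1 itself, but its general shape — discretize the kernels, then take a projected online-gradient step per update interval — can be sketched as follows. This is an illustrative simplification under assumed variable names, with a plain nonnegativity projection standing in for the paper's exact projection and regularization:

```python
import numpy as np

def intensity(mu, F, history, t, delta, z):
    """lambda_i(t) = mu_i + sum over pairs (j, s), with s in history[j]
    and 0 < t - s < z, of f_{i,j}(t - s); each kernel f_{i,j} is stored
    as a step function F[i, j, :] on a grid of spacing `delta` over [0, z)."""
    lam = mu.copy()
    for j, times in enumerate(history):
        for s in times:
            lag = t - s
            if 0.0 < lag < z:
                lam += F[:, j, int(lag / delta)]
    return lam

def npole_step(mu, F, history, t, events_at_t, delta, z, eta):
    """One projected online-gradient step on the discretized negative
    log-likelihood over the interval [t, t + delta)."""
    lam = intensity(mu, F, history, t, delta, z)
    # Gradient of the instantaneous loss: +delta from the compensator term,
    # minus 1/lambda_i for each event of type i observed in the interval.
    g = np.full_like(lam, delta)
    for i in events_at_t:
        g[i] -= 1.0 / lam[i]
    # Chain rule: each past event (j, s) within the window contributes the
    # same gradient g to the kernel bin it activates.
    for j, times in enumerate(history):
        for s in times:
            lag = t - s
            if 0.0 < lag < z:
                F[:, j, int(lag / delta)] -= eta * g
    mu -= eta * g
    np.maximum(F, 0.0, out=F)     # project kernels onto the nonnegative orthant
    np.maximum(mu, 1e-8, out=mu)  # keep baseline intensities positive
    return mu, F

# Tiny usage: p = 2 processes, kernels on [0, 1) with 10 bins of width 0.1.
p, m = 2, 10
mu = np.full(p, 0.5)
F = np.full((p, p, m), 0.1)
history = [[0.2], [0.5]]          # past event times per process
mu, F = npole_step(mu, F, history, t=0.6, events_at_t=[0],
                   delta=0.1, z=1.0, eta=0.05)
```

Per update, the cost is linear in the number of in-window events times p, which matches the online flavor of the method: no pass over the full history is ever needed.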
Open Source Code | No | "The paper does not provide any concrete access information (e.g., specific repository link, explicit code release statement, or mention of code in supplementary materials) for the methodology described."
Open Datasets | Yes | "We test the performance of NPOLE-MHP on the memetracker data [21]"
Dataset Splits | No | "The paper mentions 'training and test data' but does not provide specific details on how the dataset was split (e.g., explicit percentages, sample counts, or details on cross-validation setup)."
Hardware Specification | Yes | "The simulation of the DMD and OGD algorithms took 2 minutes combined on a Macintosh with two 6-core Intel Xeon processors at 2.4 GHz, while NPOLE-MHP took 3 minutes."
Software Dependencies | No | "The paper does not provide specific ancillary software details with version numbers (e.g., programming language versions, library versions, or specific solver versions) needed to replicate the experiment."
Experiment Setup | Yes | "In particular, we set the discretization level δ = 0.05, the window size z = 3, the step size η_k = (kδ/20 + 100)^{-1}, and the regularization coefficient ζ_{i,j} ≡ ζ = 10^{-8}." "... using a window size of 3 hours, an update interval δ = 0.2 seconds, and a step size η_k = 1/(kζ + 800) with ζ = 10^{-10} for NPOLE-MHP."
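Both quoted setups use reciprocal step-size decays in the iteration index k. A minimal sketch evaluating the two schedules; the constants follow the quoted setup, and should be treated as assumptions where the source is ambiguous:

```python
# The two quoted step-size schedules as reciprocal decays in the iteration
# index k. Constants follow the quoted experiment setup (assumed values).

def eta_synthetic(k, delta=0.05):
    """Synthetic-data schedule: eta_k = (k * delta / 20 + 100)^(-1)."""
    return 1.0 / (k * delta / 20.0 + 100.0)

def eta_memetracker(k, zeta=1e-10):
    """MemeTracker schedule: eta_k = 1 / (k * zeta + 800)."""
    return 1.0 / (k * zeta + 800.0)

# Both decay like 1/k for large k, so sum(eta_k) diverges while
# sum(eta_k^2) converges -- the standard conditions for online
# gradient methods.
eta0 = eta_synthetic(0)       # 1/100
eta_m0 = eta_memetracker(0)   # 1/800
```

Note the different scales: the synthetic schedule starts at 1/100 and decays noticeably within a run, while the MemeTracker schedule is nearly constant at 1/800 for any realistic number of updates because ζ is so small.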