The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process

Authors: Hongyuan Mei, Jason M. Eisner

NeurIPS 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We fit our various models on several simulated and real-world datasets, and evaluated them in each case by the log-probability that they assigned to held-out data. We also compared our approach with that of Du et al. (2016) on their prediction task." (A sketch of this held-out log-likelihood computation follows the table.) |
| Researcher Affiliation | Academia | Hongyuan Mei and Jason Eisner, Department of Computer Science, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, U.S.A.; {hmei,jason}@cs.jhu.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; algorithms are described in prose. |
| Open Source Code | Yes | "Our code and data are available at https://github.com/HMEIatJHU/neurawkes." |
| Open Datasets | Yes | Retweets dataset (Zhao et al., 2015); MemeTrack dataset (Leskovec and Krevl, 2014); and the MIMIC-II electronic medical records dataset, a collection of de-identified clinical visit records of Intensive Care Unit patients spanning 7 years. These datasets are cited or are standard benchmarks, indicating public availability. |
| Dataset Splits | Yes | "We divide our data into training, validation, and test sets." The validation set is used to select the model and hyperparameters, and log-likelihood is reported on the held-out test set. (A minimal splitting sketch follows the table.) |
| Hardware Specification | Yes | "The NVIDIA Corporation kindly donated two Titan X Pascal GPUs." |
| Software Dependencies | No | The paper mentions using TensorFlow and Adam for optimization but gives no version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | "For all models, we initialize parameters uniformly from [−0.01, 0.01] and clip gradients to 1. We use the Adam (Kingma and Ba, 2015) optimization algorithm with a learning rate of 0.001." (A hedged training-setup sketch follows the table.) |
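The held-out log-likelihood evaluation cited under Research Type decomposes, for any multivariate point process, into the sum of log-intensities at the observed events minus the integral of the total intensity over the observation window, with the integral estimated by Monte Carlo sampling as in the paper. Below is a minimal Python sketch of that computation, assuming an `intensity` callable that stands in for the model's λ_k(t); the function name and interface are hypothetical, not the authors' API.

```python
import numpy as np

def sequence_log_likelihood(intensity, events, T, num_mc_samples=1000, rng=None):
    """Log-likelihood of one event stream under a multivariate point process.

    log p = sum_i log lambda_{k_i}(t_i)  -  integral_0^T sum_k lambda_k(t) dt

    intensity(t) -> 1-D array of per-type intensities lambda_k(t) at time t
                    (a hypothetical stand-in for the model being evaluated).
    events       -> list of (time, type_index) pairs with times in (0, T].
    The integral term is estimated by Monte Carlo over uniform sample times.
    """
    rng = rng or np.random.default_rng(0)
    # Point term: log-intensity of each observed event, at its time, for its type.
    point_term = sum(np.log(intensity(t)[k]) for t, k in events)
    # Integral term: T * E_{t ~ Uniform(0, T)}[ sum_k lambda_k(t) ].
    sample_times = rng.uniform(0.0, T, size=num_mc_samples)
    mean_total_intensity = np.mean([intensity(t).sum() for t in sample_times])
    return point_term - T * mean_total_intensity

if __name__ == "__main__":
    # Sanity check on a 2-type homogeneous Poisson process with rates (0.5, 1.5):
    # the exact value is log(0.5) + log(1.5) - T * 2.0, and the Monte Carlo
    # estimate is exact here because the intensity is constant.
    mu = np.array([0.5, 1.5])
    print(sequence_log_likelihood(lambda t: mu, [(0.3, 0), (1.2, 1)], T=2.0))
```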
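For the Dataset Splits row, the paper confirms a training/validation/test protocol but does not state a single splitting recipe in the quoted passage. The sketch below shows one plausible way to split whole event streams; the function name and the split fractions are illustrative assumptions.

```python
import random

def split_sequences(sequences, dev_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle whole event sequences and split into train/dev/test.

    Splitting at the sequence level (rather than within sequences) keeps each
    held-out stream intact, which is what held-out log-likelihood requires.
    The fractions are placeholders; the paper's proportions vary by dataset.
    """
    seqs = list(sequences)
    random.Random(seed).shuffle(seqs)
    n_dev = int(len(seqs) * dev_frac)
    n_test = int(len(seqs) * test_frac)
    dev = seqs[:n_dev]
    test = seqs[n_dev:n_dev + n_test]
    train = seqs[n_dev + n_test:]
    return train, dev, test
```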
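The Experiment Setup row states three concrete hyperparameters: uniform initialization on [−0.01, 0.01], gradient clipping at 1, and Adam with learning rate 0.001. The sketch below applies those settings in PyTorch; the framework choice is an assumption for illustration (the released code may use a different framework), the helper names are hypothetical, and since the quote does not say whether "clip to 1" means norm or value clipping, global-norm clipping is assumed.

```python
import torch

def configure_training(model):
    """Apply the paper's stated settings to an arbitrary torch model.

    Stated in the paper: parameters initialized uniformly from [-0.01, 0.01]
    and Adam with learning rate 0.001. (Hypothetical helper, not the authors'
    code.)
    """
    for p in model.parameters():
        torch.nn.init.uniform_(p, -0.01, 0.01)
    return torch.optim.Adam(model.parameters(), lr=0.001)

def training_step(model, loss_fn, batch, optimizer):
    """One optimization step with gradients clipped to 1.

    Clipping the global gradient norm is an interpretation; the paper does
    not specify norm clipping versus value clipping.
    """
    optimizer.zero_grad()
    loss = loss_fn(model, batch)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()
```

In this sketch, `loss_fn` would return the negative of the per-sequence log-likelihood shown earlier, so that minimizing the loss maximizes the log-probability of the training data.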