Tweedie-Hawkes Processes: Interpreting the Phenomena of Outbreaks

Authors: Tianbo Li, Yiping Ke
Pages: 4699-4706

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluations on real-world datasets show that THP outperforms the rival state-of-the-art baselines in the task of forecasting future events. In this section, we demonstrate applications and evaluations on both synthetic and real-world datasets. Experimental results also show that THP outperforms the state-of-the-art baselines in data fitting and event prediction.
Researcher Affiliation | Academia | Tianbo Li, Yiping Ke; School of Computer Science and Engineering, Nanyang Technological University, Singapore; tianbo001@e.ntu.edu.sg, ypke@ntu.edu.sg
Pseudocode | No | The paper describes the variational EM algorithm textually but does not include a formal pseudocode block or algorithm listing.
Open Source Code | No | The paper does not provide any statement or link regarding the public release of its source code.
Open Datasets | Yes | MERS-CoV (MC): the dataset we use in Task 1. The data is collected from the WHO website, where we study the reported cases in Saudi Arabia in the first 200 days of 2017. Meme Tracker (MT) (Leskovec, Backstrom, and Kleinberg 2009): the dataset in Task 2. IPTV (Luo et al. 2014): the dataset consists of IPTV viewing events... Weeplace (Liu et al.): This dataset contains the checkin histories of users...
Dataset Splits | No | The paper specifies training and testing splits (e.g., '60% as training dataset, and 40% as testing dataset' or 'two halves. The first half is for training and the second half for testing.') but does not explicitly mention a separate validation set or split.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions adopting the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm but does not list any specific software or library versions (e.g., Python 3.x, PyTorch 1.x) used for implementation.
Experiment Setup | No | The paper does not provide specific details about the experimental setup, such as hyperparameters (learning rate, batch size, number of epochs) or other system-level training settings.
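The report notes that the paper applies the BFGS algorithm inside its variational EM procedure but gives no implementation details. Purely as an illustrative sketch (not the authors' THP code), the snippet below fits an ordinary exponential-kernel Hawkes process by maximum likelihood using SciPy's BFGS; the event times and parameter names are made up for the example.

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_neg_loglik(log_params, times, T):
    """Negative log-likelihood of an exponential-kernel Hawkes process.

    Intensity: lambda(t) = mu + alpha * beta * sum_{t_j < t} exp(-beta (t - t_j)).
    Parameters are optimized on the log scale so mu, alpha, beta stay positive.
    """
    mu, alpha, beta = np.exp(log_params)
    A = 0.0          # recursive excitation term at the current event
    loglik = 0.0
    prev = None
    for t in times:
        if prev is not None:
            A = np.exp(-beta * (t - prev)) * (1.0 + A)
        loglik += np.log(mu + alpha * beta * A)
        prev = t
    # Compensator: integral of the intensity over the observation window [0, T].
    compensator = mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - times)))
    return -(loglik - compensator)

# Toy event times on [0, 5]; in practice these would come from the data.
times = np.array([0.5, 1.1, 1.3, 2.0, 2.1, 2.15, 3.4, 4.0, 4.05, 4.9])
T = 5.0
x0 = np.log([1.0, 0.5, 1.0])  # initial (mu, alpha, beta)
res = minimize(hawkes_neg_loglik, x0=x0, args=(times, T), method="BFGS")
mu_hat, alpha_hat, beta_hat = np.exp(res.x)
print(f"mu={mu_hat:.3f}, alpha={alpha_hat:.3f}, beta={beta_hat:.3f}")
```

This is only the plain Hawkes baseline; THP's Tweedie-structured likelihood and variational E-step would replace the objective above, but the paper does not specify those implementation details.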