Attentive Neural Point Processes for Event Forecasting

Authors: Yulong Gu (pp. 7592-7600)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on one synthetic and four real-world datasets demonstrate that ANPP can achieve significant performance gains against state-of-the-art methods for predictions of both timings and markers.
Researcher Affiliation | Industry | Yulong Gu, Alibaba Group, guyulongcs@gmail.com
Pseudocode | No | The paper describes the model architecture and equations but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | To facilitate future research, we release the codes and datasets at https://github.com/guyulongcs/AAAI2021_ANPP.
Open Datasets | Yes | In the experiments, we use one synthetic dataset Hawkes and four real-world datasets in medical, financial and e-commerce domains respectively as benchmarks, which are widely used in literature (Du et al. 2016; Mei and Eisner 2017; Kang and McAuley 2018). Table 1 shows the numbers of the marker types |M| and the total numbers of the events |E| in these datasets. In the experiments, for each dataset, we randomly select 70% sequences for training, 10% sequences for validation and the rest 20% sequences for testing.
Dataset Splits | Yes | In the experiments, for each dataset, we randomly select 70% sequences for training, 10% sequences for validation and the rest 20% sequences for testing.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models) used for running the experiments.
Software Dependencies | No | The paper mentions TensorFlow but does not provide specific version numbers for it or any other software dependencies.
Experiment Setup | Yes | The Adam optimizer is utilized for training. In the experiments... The learning rate, batch size, hidden vector dimension, dropout rate are set to 0.001, 64, 64 and 0.5 respectively... The number of blocks and heads are set to 2 and 4 using the validation set.
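The 70%/10%/20% random sequence split reported above can be sketched as follows. This is a minimal illustration, not the authors' released code; `sequences` and the seed are hypothetical stand-ins for the paper's actual data handling.

```python
import random

def split_sequences(sequences, seed=42):
    """Randomly split event sequences into 70% train, 10% validation, 20% test.

    Splitting is done at the sequence level (whole sequences, not
    individual events), as described in the paper.
    """
    seqs = list(sequences)
    random.Random(seed).shuffle(seqs)  # fixed seed only for reproducibility of this sketch
    n = len(seqs)
    n_train = int(0.7 * n)
    n_val = int(0.1 * n)
    train = seqs[:n_train]
    val = seqs[n_train:n_train + n_val]
    test = seqs[n_train + n_val:]  # remaining ~20%
    return train, val, test

train, val, test = split_sequences(range(100))
print(len(train), len(val), len(test))  # 70 10 20
```

Splitting whole sequences (rather than events within a sequence) keeps each held-out sequence's full event history intact for evaluation.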
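The hyperparameters quoted in the Experiment Setup row can be gathered into a single configuration object. This is a hedged sketch for readability only; the class name `ANPPConfig` and its field names are illustrative, not taken from the authors' release.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ANPPConfig:
    """Training configuration reported in the paper (Adam optimizer;
    block/head counts tuned on the validation set)."""
    learning_rate: float = 0.001  # Adam learning rate
    batch_size: int = 64
    hidden_dim: int = 64          # hidden vector dimension
    dropout_rate: float = 0.5
    num_blocks: int = 2           # attention blocks, chosen on validation set
    num_heads: int = 4            # attention heads, chosen on validation set

cfg = ANPPConfig()
print(cfg.learning_rate, cfg.batch_size, cfg.num_blocks, cfg.num_heads)  # 0.001 64 2 4
```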