EasyTPP: Towards Open Benchmarking Temporal Point Processes

Authors: Siqiao Xue, Xiaoming Shi, Zhixuan Chu, Yan Wang, Hongyan Hao, Fan Zhou, Caigao Jiang, Chen Pan, James Y. Zhang, Qingsong Wen, Jun Zhou, Hongyuan Mei

ICLR 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We comprehensively evaluate 9 models in our benchmark, which include the classical Multivariate Hawkes Process (MHP) with an exponential kernel (see Appendix B for more details) and 8 widely-cited state-of-the-art neural models. (The standard exponential-kernel intensity is recalled below the table for reference.)
Researcher Affiliation Collaboration Siqiao Xue, Xiaoming Shi, Zhixuan Chu, Yan Wang, Hongyan Hao, Fan Zhou, Caigao Jiang, Chen Pan, James Y. Zhang, Qingsong Wen, Jun Zhou (Ant Group, Alibaba Group; siqiao.xsq@alibaba-inc.com); Hongyuan Mei (TTIC; hongyuan@ttic.edu)
Pseudocode Yes Listing 1: Pseudo implementation of customizing a TPP model in PyTorch using EasyTPP. (An illustrative sketch of such a customization is given below the table.)
Open Source Code Yes The code and data are available at https://github.com/ant-research/EasyTemporalPointProcess.
Open Datasets Yes All preprocessed datasets are available at Google Drive.
Dataset Splits Yes Following common practice, we split the set of sequences into disjoint train, validation, and test sets.
Hardware Specification Yes All the experiments were conducted on a server with 256 GB of RAM, a 64-logical-core CPU (Intel(R) Xeon(R) Platinum 8163 @ 2.50GHz), and one NVIDIA Tesla P100 GPU for acceleration.
Software Dependencies No Our library is compatible with both PyTorch (Paszke et al., 2019) and TensorFlow (Abadi et al., 2016), the two most popular deep learning frameworks, and thus offers great flexibility for future research in method development. (No specific version numbers for these frameworks are provided, only citations to their original papers.)
Experiment Setup Yes We keep the model architectures as in their original implementations. For a fair comparison, we use the same training procedure for all models: Adam (Kingma & Ba, 2015) with the default parameters, biases initialized with zeros, no learning rate decay, the same maximum number of training epochs, and an early stopping criterion (based on log-likelihood on the held-out dev set). (A sketch of this shared training loop is given below.)
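
For reference, the exponential-kernel multivariate Hawkes process mentioned in the Research Type row is conventionally defined by the conditional intensity below. This is the textbook parameterization, stated here as an assumption; it is not necessarily the exact form used in Appendix B of the paper.

```latex
% Conditional intensity of event type k for a multivariate Hawkes process
% with an exponential kernel (standard parameterization, assumed here):
\lambda_k(t) \;=\; \mu_k \;+\; \sum_{i:\, t_i < t} \alpha_{k, k_i}\, e^{-\beta_{k, k_i}\,(t - t_i)},
\qquad \mu_k > 0,\quad \alpha_{k, k_i} \ge 0,\quad \beta_{k, k_i} > 0,
```

where \mu_k is the base rate of type k, and each past event of type k_i at time t_i excites type k by \alpha_{k, k_i}, with the excitation decaying at rate \beta_{k, k_i}.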
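Listing 1 itself is not reproduced in this report. The following is a minimal sketch, in plain PyTorch, of what customizing a neural TPP model generally involves: embedding event marks, evolving a hidden state, and exposing a positive intensity plus a sequence log-likelihood. The class name, method names, and the one-sample Monte Carlo integral are illustrative assumptions, not the actual EasyTPP API.

```python
# Illustrative sketch only (NOT the EasyTPP API): a toy continuous-time
# TPP whose hidden state decays exponentially between events.
import torch
import torch.nn as nn


class ToyExpDecayTPP(nn.Module):
    def __init__(self, num_event_types: int, hidden_size: int = 32):
        super().__init__()
        self.embed = nn.Embedding(num_event_types, hidden_size)
        self.rnn = nn.GRUCell(hidden_size, hidden_size)
        self.decay = nn.Parameter(torch.ones(hidden_size))        # per-dimension decay rate
        self.intensity_head = nn.Linear(hidden_size, num_event_types)
        self.softplus = nn.Softplus()

    def intensity(self, h, dt):
        # Decay the hidden state over the elapsed time dt, then map it to
        # a positive intensity for every event type.
        h_decayed = h * torch.exp(-self.softplus(self.decay) * dt.unsqueeze(-1))
        return self.softplus(self.intensity_head(h_decayed))      # (batch, num_types)

    def forward(self, event_types, event_dtimes):
        # event_types:  (batch, seq_len) integer marks
        # event_dtimes: (batch, seq_len) inter-event times
        batch, seq_len = event_types.shape
        h = event_types.new_zeros(batch, self.rnn.hidden_size, dtype=torch.float)
        log_lik = 0.0
        for i in range(seq_len):
            dt = event_dtimes[:, i]
            lam = self.intensity(h, dt)
            # Event term: log-intensity of the observed mark at the event time.
            log_lik = log_lik + torch.log(
                lam.gather(1, event_types[:, i:i + 1]).squeeze(1) + 1e-9
            )
            # Non-event term: crude one-sample Monte Carlo estimate of the
            # integral of the total intensity over the inter-event interval.
            t_mc = torch.rand_like(dt) * dt
            log_lik = log_lik - self.intensity(h, t_mc).sum(-1) * dt
            # Update the hidden state with the observed event.
            h = self.rnn(self.embed(event_types[:, i]), h)
        return log_lik                                             # (batch,) log-likelihood
```

In EasyTPP, the analogous customization would subclass the library's own base model and reuse its data interface; the sketch above only conveys the general shape of such a model.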
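The shared training protocol quoted in the Experiment Setup row can be summarized roughly as follows. The function below is a sketch under stated assumptions (a model returning per-sequence log-likelihood, generic data loaders, and hypothetical max_epochs/patience values); it is not the library's actual trainer.

```python
# Illustrative sketch of the shared training protocol: Adam with default
# parameters, zero-initialized biases, no learning-rate decay, a fixed
# epoch budget, and early stopping on held-out dev log-likelihood.
import copy
import torch


def train(model, train_loader, dev_loader, max_epochs=100, patience=5):
    # Zero-initialize all bias parameters, as described in the setup.
    for name, param in model.named_parameters():
        if "bias" in name:
            torch.nn.init.zeros_(param)

    optimizer = torch.optim.Adam(model.parameters())    # default hyperparameters, no LR decay
    best_dev_ll, best_state, stale_epochs = float("-inf"), None, 0

    for epoch in range(max_epochs):
        model.train()
        for types, dtimes in train_loader:
            loss = -model(types, dtimes).mean()           # negative log-likelihood
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # Early stopping criterion: log-likelihood on the held-out dev set.
        model.eval()
        with torch.no_grad():
            dev_ll = sum(model(t, d).sum().item() for t, d in dev_loader)
        if dev_ll > best_dev_ll:
            best_dev_ll, best_state, stale_epochs = dev_ll, copy.deepcopy(model.state_dict()), 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:
                break

    if best_state is not None:
        model.load_state_dict(best_state)                # restore the best dev checkpoint
    return model
```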