C-NTPP: Learning Cluster-Aware Neural Temporal Point Process
Authors: Fangyu Ding, Junchi Yan, Haiyang Wang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that c-NTPP achieves superior performance on both real-world and synthetic datasets, and it can also uncover the underlying clustering correlations. |
| Researcher Affiliation | Collaboration | Fangyu Ding¹, Junchi Yan¹, Haiyang Wang². ¹Department of Computer Science and Engineering and MOE Key Lab of AI, Shanghai Jiao Tong University; ²Ant Group, Hangzhou, China. {arthur 99, yanjunchi}@sjtu.edu.cn, chixi.why@mybank.cn |
| Pseudocode | Yes | Algorithm 1: The forward pass of c-NTPP. |
| Open Source Code | No | The paper does not provide a statement about releasing the source code for their proposed method or a link to a code repository. |
| Open Datasets | Yes | 4.1 Datasets and Protocols 1) SYN. A synthetic dataset simulated by the open-source library Tick (Bacry et al. 2017). ... 2) FIN. The financial dataset (Du et al. 2016) ... 3) MMC. MIMIC II (Saeed et al. 2002) ... 4) RET. The Retweets dataset (Zhao et al. 2015) ... 5) SO. The Stack Overflow dataset (Paranjape, Benson, and Leskovec 2017) |
| Dataset Splits | Yes | The pre-processing and the train-val-test split details of the datasets are consistent with previous works (Mei and Eisner 2017; Zuo et al. 2020); the validation datasets are used to find the optimal number of epochs to train on the training datasets, which alleviates the overfitting problem. |
| Hardware Specification | Yes | All the experiments run on a single RTX-2080Ti (11GB) GPU; we cut the long sequences to a maximum sequence length of 128 to avoid GPU memory overflow. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' but does not specify version numbers for any deep learning frameworks (e.g., PyTorch, TensorFlow) or other libraries used. |
| Experiment Setup | Yes | Batch size is set to 16 and SVAE sample number N is set to 1. The number of maximum clusters is pre-defined as 4. We set the dropout rate as 0.1. The random seed is fixed to 42 for reproducible results. We use Adam optimizer with a learning rate of 0.0001 for training. |
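
The "Experiment Setup", "Hardware Specification", and "Dataset Splits" rows together describe the reported training configuration. Below is a minimal PyTorch sketch of that configuration, assuming a hypothetical stand-in model (`CNTPPStub`), dummy inter-event-time data, a placeholder MSE objective, and an arbitrary hidden size and epoch budget; only the seed, batch size, SVAE sample count, cluster cap, dropout rate, learning rate, and maximum sequence length come from the paper, and this is not the authors' (unreleased) implementation.

```python
import random
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Fixed seed for reproducibility, as stated in the "Experiment Setup" row.
SEED = 42
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)

# Hyperparameters quoted in the table above; everything else in this sketch
# is illustrative.
config = {
    "batch_size": 16,
    "svae_samples": 1,    # SVAE sample number N
    "max_clusters": 4,    # pre-defined maximum number of clusters
    "dropout": 0.1,
    "lr": 1e-4,
    "max_seq_len": 128,   # long sequences truncated to avoid GPU memory overflow
}

# `CNTPPStub` is a hypothetical stand-in, not the authors' model; the real
# forward pass follows Algorithm 1 in the paper, for which no code is released.
class CNTPPStub(nn.Module):
    def __init__(self, d_model=64, dropout=0.1):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=d_model, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(d_model, 1)

    def forward(self, dt):
        # dt: (batch, seq_len, 1) inter-event times, truncated to max_seq_len
        h, _ = self.encoder(dt)
        return self.head(self.dropout(h))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CNTPPStub(dropout=config["dropout"]).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])

# Dummy inter-event-time sequences, only to make the sketch executable.
dt = torch.rand(256, config["max_seq_len"], 1)
train_loader = DataLoader(TensorDataset(dt[:200]),
                          batch_size=config["batch_size"], shuffle=True)
val_loader = DataLoader(TensorDataset(dt[200:]), batch_size=config["batch_size"])

# Per the "Dataset Splits" row, the validation split is used only to choose
# the number of training epochs.
best_val, best_epoch = float("inf"), -1
for epoch in range(10):  # epoch budget is illustrative
    model.train()
    for (batch,) in train_loader:
        batch = batch.to(device)
        optimizer.zero_grad()
        # Placeholder objective; the paper optimizes a sequential-VAE ELBO.
        loss = nn.functional.mse_loss(model(batch), batch)
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(
            nn.functional.mse_loss(model(b.to(device)), b.to(device)).item()
            for (b,) in val_loader
        )
    if val_loss < best_val:
        best_val, best_epoch = val_loss, epoch
```

The sketch mirrors the quoted protocol: the seed is fixed, the stated hyperparameters drive the optimizer and data loaders, and the validation loop only records the best-performing epoch rather than tuning any other quantity.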