Self-Adaptable Point Processes with Nonparametric Time Decays
Authors: Zhimeng Pan, Zheng Wang, Jeff M Phillips, Shandian Zhe
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | For evaluation, we first examined SPRITE in an ablation study. We tested the case of a single event type with excitation effects only and three time decays, and a bi-type event case with mixed excitation and inhibition effects. In both cases, our method accurately recovered the rate function of each event type and the influence between the events, performing better than state-of-the-art methods based on or extending Hawkes processes and RNNs. Next, we examined SPRITE on three real-world datasets and one synthetic benchmark dataset, evaluating the accuracy of predicting the occurrence time and type of future events. In both tasks, our method nearly always outperformed the competing approaches, often by a large margin. |
| Researcher Affiliation | Academia | School of Computing, University of Utah Salt Lake City, UT 84112 {z.pan,wzhut}@utah.edu, jeffp@cs.utah.edu, zhe@cs.utah.edu |
| Pseudocode | No | The paper describes its algorithm and model estimation process through mathematical formulations and textual descriptions, but it does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper refers to open-source implementations for baseline methods (Neural HP, SAHP, RMTPP) but does not provide a statement or link for the open-sourcing of SPRITE, the method proposed in this paper. |
| Open Datasets | Yes | We downloaded the preprocessed datasets from https://drive.google.com/drive/folders/0BwqmV0EcoUc8UklIR1BKV25YR1U. |
| Dataset Splits | Yes | For each case, we generated 10K sequences for training and 1K for validation. The length of each sequence is 32. ... Again, we sampled 10K sequences for training and 1K for validation. Each sequence includes 64 events. ... For Retweets, MIMIC and SO, we randomly split the dataset into 70% for training, 10% for validation, 20% for testing. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., CPU, GPU models, memory). |
| Software Dependencies | No | We implemented SPRITE and HP with TensorFlow (Abadi et al., 2016). ... We used the original implementation of Neural HP (https://github.com/HMEIatJHU/neurawkes) and SAHP (https://github.com/QiangAIResearcher/sahp_repo), and a high-quality open-source implementation of RMTPP (https://github.com/woshiyyya/ERPP-RMTPP). |
| Experiment Setup | Yes | For SPRITE and HP, we set the minibatch size of the event sequences to 16 and the learning rate to 10⁻³. We set the dimension of the embeddings to 4 for SPRITE. The nonlinear feature mappings φ_g and φ_η in (8) were both chosen as a single-layer feed-forward NN, with 16 neurons and leaky ReLU as the activation function. We used the default settings of all the other methods. We ran each method for 50 epochs (enough for convergence). To avoid an unfair comparison caused by overfitting (especially for RNN-based methods), we evaluated the likelihood on a validation dataset after each epoch and stopped training if there was no improvement (early stopping). (A configuration sketch follows the table.) |
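
For readers who want to mirror the evaluation protocol, here is a minimal sketch of the random 70%/10%/20% train/validation/test split quoted in the Dataset Splits row. The `sequences` argument and the fixed seed are illustrative assumptions, not details taken from the paper.

```python
import random

def split_sequences(sequences, seed=0):
    """Randomly partition event sequences into 70% train, 10% validation, 20% test."""
    rng = random.Random(seed)  # fixed seed is an assumption, used only for repeatability
    indices = list(range(len(sequences)))
    rng.shuffle(indices)
    n_train = int(0.7 * len(indices))
    n_val = int(0.1 * len(indices))
    train = [sequences[i] for i in indices[:n_train]]
    val = [sequences[i] for i in indices[n_train:n_train + n_val]]
    test = [sequences[i] for i in indices[n_train + n_val:]]
    return train, val, test
```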
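
The Experiment Setup row fixes the batch size (16), learning rate (10⁻³), embedding dimension (4), the single-layer feed-forward mappings with 16 neurons and leaky ReLU, and early stopping on validation likelihood. A minimal TensorFlow sketch of that configuration follows; the optimizer choice (Adam), `NUM_EVENT_TYPES`, and the `train_batches`, `sequence_log_likelihood`, and `validation_log_likelihood` helpers are assumptions introduced for illustration, since the quoted setup does not specify them.

```python
import tensorflow as tf

BATCH_SIZE = 16        # minibatch size of event sequences (from the paper)
LEARNING_RATE = 1e-3   # reported learning rate
EMBED_DIM = 4          # embedding dimension reported for SPRITE
HIDDEN = 16            # neurons in the single-layer feed-forward mappings
MAX_EPOCHS = 50        # "enough for convergence" per the paper
NUM_EVENT_TYPES = 2    # hypothetical; depends on the dataset

# Event-type embeddings and the two single-layer mappings standing in for φ_g and φ_η.
event_embed = tf.keras.layers.Embedding(NUM_EVENT_TYPES, EMBED_DIM)
phi_g = tf.keras.layers.Dense(HIDDEN, activation=tf.nn.leaky_relu)
phi_eta = tf.keras.layers.Dense(HIDDEN, activation=tf.nn.leaky_relu)

optimizer = tf.keras.optimizers.Adam(LEARNING_RATE)  # optimizer type is an assumption

best_val_ll = float("-inf")
for epoch in range(MAX_EPOCHS):
    for batch in train_batches(BATCH_SIZE):            # hypothetical data iterator
        with tf.GradientTape() as tape:
            loss = -sequence_log_likelihood(batch)     # hypothetical model objective
        variables = (event_embed.trainable_variables
                     + phi_g.trainable_variables
                     + phi_eta.trainable_variables)
        grads = tape.gradient(loss, variables)
        optimizer.apply_gradients(zip(grads, variables))
    val_ll = validation_log_likelihood()               # hypothetical validation metric
    if val_ll <= best_val_ll:                          # early stopping on no improvement
        break
    best_val_ll = val_ll
```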