UNIPoint: Universally Approximating Point Processes Intensities
Authors: Alexander Soen, Alexander Mathews, Daniel Grixti-Cheng, Lexing Xie
AAAI 2021, pp. 9685–9694
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations on synthetic and real-world datasets show that this simpler representation performs better than Hawkes process variants and more complex neural network-based approaches. |
| Researcher Affiliation | Academia | Alexander Soen, Alexander Mathews, Daniel Grixti-Cheng, Lexing Xie; The Australian National University; alexander.soen@anu.edu.au, alex.mathews@anu.edu.au, a500846@anu.edu.au, lexing.xie@anu.edu.au |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Reference code is available online: https://github.com/alexandersoen/unipoint |
| Open Datasets | Yes | MOOC: a dataset of student interactions in online courses (Kumar, Zhang, and Leskovec 2019), previously used for evaluating neural point processes (Shchur, Biloš, and Günnemann 2020). https://github.com/srijankr/jodie/ |
| Dataset Splits | Yes | We fit models for all synthetic and real-world datasets, with a 60:20:20 train-validation-test split. (See the split sketch after the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | Our models are implemented in PyTorch (https://pytorch.org; Paszke et al. 2017). While PyTorch is mentioned, a specific version number is not provided. |
| Experiment Setup | Yes | All UNIPoint models tested employ an RNN with 48 hidden units, a batch size of 64, and are trained using Adam (Kingma and Ba 2014) with L2 weight decay set to 10⁻⁵. The validation set is used for early stopping: training halts if the validation loss does not improve by more than 10⁻⁴ for 100 successive minibatches. (See the training-loop sketch after the table.) |
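For concreteness, here is a minimal sketch of the 60:20:20 train-validation-test split reported above, assuming a list of event sequences. The function name `split_60_20_20`, the shuffling, and the seed are illustrative assumptions, not taken from the paper's released code.

```python
import torch

def split_60_20_20(sequences, seed=0):
    """Shuffle a list of sequences and split it 60:20:20 into train/val/test."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(len(sequences), generator=g).tolist()
    n_train = int(0.6 * len(sequences))
    n_val = int(0.2 * len(sequences))
    train = [sequences[i] for i in perm[:n_train]]
    val = [sequences[i] for i in perm[n_train:n_train + n_val]]
    test = [sequences[i] for i in perm[n_train + n_val:]]
    return train, val, test

# Toy usage with 100 variable-length event sequences.
seqs = [torch.rand(torch.randint(5, 15, (1,)).item()) for _ in range(100)]
train, val, test = split_60_20_20(seqs)
assert len(train) == 60 and len(val) == 20 and len(test) == 20
```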
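And a minimal PyTorch sketch of the reported training configuration: an RNN with 48 hidden units, batch size 64, Adam with L2 weight decay 10⁻⁵, and early stopping when the validation loss fails to improve by more than 10⁻⁴ for 100 successive minibatches. The model head, MSE loss, and synthetic data below are placeholders for illustration (UNIPoint is trained with the point-process log-likelihood), not the authors' implementation.

```python
import torch
from torch import nn, optim

torch.manual_seed(0)
model = nn.RNN(input_size=1, hidden_size=48, batch_first=True)  # 48 hidden units
head = nn.Linear(48, 1)                                         # placeholder output head
params = list(model.parameters()) + list(head.parameters())
optimizer = optim.Adam(params, weight_decay=1e-5)               # L2 weight decay = 10^-5
loss_fn = nn.MSELoss()  # placeholder; the paper optimizes the point-process NLL

def make_batch(n=64, t=20):
    """Toy batch of 64 sequences; target is a shifted copy (next-step prediction)."""
    x = torch.randn(n, t, 1)
    return x, x.roll(-1, dims=1)

best_val, stale = float("inf"), 0
PATIENCE, MIN_DELTA = 100, 1e-4  # 100 minibatches, 10^-4 improvement threshold
for step in range(10_000):
    x, y = make_batch()
    out, _ = model(x)
    loss = loss_fn(head(out), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():  # check validation loss after every minibatch
        xv, yv = make_batch()
        ov, _ = model(xv)
        val = loss_fn(head(ov), yv).item()
    if val < best_val - MIN_DELTA:   # "improve by more than 10^-4"
        best_val, stale = val, 0
    else:
        stale += 1
    if stale >= PATIENCE:            # 100 successive minibatches without improvement
        break
```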