Probabilistic Attention-to-Influence Neural Models for Event Sequences
Authors: Xiao Shou, Debarun Bhattacharjya, Tian Gao, Dharmashankar Subramanian, Oktie Hassanzadeh, Kristin Bennett
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We motivate our general framework and show improved performance in experiments compared to existing baselines on synthetic data as well as real-world benchmarks, for tasks involving prediction and influencing set identification. |
| Researcher Affiliation | Collaboration | 1Rensselaer Polytechnic Institute, Troy, NY, USA 2IBM AI Research, Yorktown Heights, NY, USA. |
| Pseudocode | Yes | Algorithm 1 Topology-based event sequence generator (with Python pseudo code) |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating that the authors have released open-source code for their proposed model. It mentions a GitHub link in the appendix related to a baseline (THP), but not their own code. |
| Open Datasets | Yes | Datasets. We consider 5 real event datasets in different domains curated previously (Bhattacharjya et al., 2022). ... Diabetes (Frank & Asuncion, 2010) ... Stack Overflow (Grant & Betts, 2013) ... Linked In (Xu et al., 2017) ... Beige Books ... Timelines ... We show an example of influencing set discovery by our model Uniform-τ on a dataset derived from a corpus of news article snippets from Event Registry (Leban et al., 2014). |
| Dataset Splits | Yes | Each dataset is randomly split into 70%-15%-15% train, dev, and test set. |
| Hardware Specification | Yes | All our experiments are performed on a private server (IDEA Cluster, https://idea.rpi.edu/) with TITAN RTX GPU. |
| Software Dependencies | No | The paper mentions implementing components in PyTorch and using the Adam optimizer, but it does not provide version numbers for these software dependencies (e.g., PyTorch 1.x). |
| Experiment Setup | Yes | The experiment setting for hyperparameters in Uniform-2 and Sparse-2 for binary prediction is given in Table 7. The τ values for Uniform-τ are {0.4,0.5,0.6} and for Sparse-τ they are {0.1,0.2,0.3}. |
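The 70%-15%-15% random split reported above can be sketched as follows. This is a minimal illustration, not the authors' released code; the function name, seed handling, and use of Python's standard `random` module are assumptions.

```python
import random


def split_dataset(sequences, train_frac=0.70, dev_frac=0.15, seed=0):
    """Randomly split event sequences into train/dev/test sets.

    Sketches the paper's 70%-15%-15% random split; the remainder
    after the train and dev fractions becomes the test set.
    """
    idx = list(range(len(sequences)))
    random.Random(seed).shuffle(idx)  # reproducible shuffle
    n_train = int(train_frac * len(idx))
    n_dev = int(dev_frac * len(idx))
    train = [sequences[i] for i in idx[:n_train]]
    dev = [sequences[i] for i in idx[n_train:n_train + n_dev]]
    test = [sequences[i] for i in idx[n_train + n_dev:]]
    return train, dev, test
```

Fixing the shuffle seed makes the split reproducible across runs, which is the property the reported split sizes depend on.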