Proximal Graphical Event Models
Authors: Debarun Bhattacharjya, Dharmashankar Subramanian, Tian Gao
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We present efficient heuristics for learning PGEMs from data, demonstrating their effectiveness on synthetic and real datasets." (Section 4, Experiments); Figure 2: Model comparisons with 10 synthetic event datasets generated from 6 PGEMs; Table 1: Log likelihood of models for experiments on the books dataset; Table 2: Log likelihood of models for experiments on the ICEWS dataset |
| Researcher Affiliation | Industry | Debarun Bhattacharjya, Dharmashankar Subramanian, Tian Gao; IBM Research, Thomas J. Watson Research Center, Yorktown Heights, NY, USA; {debarunb,dharmash,tgao}@us.ibm.com |
| Pseudocode | Yes | Algorithm 1: Change points in w across all of piece-wise linear functions D(y, z); Algorithm 2: Forward Backward Search |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We consider two books from the SPMF data mining library [Fournier-Viger et al., 2014]: Leviathan, a book by Thomas Hobbes from the 1600s, and the Bible. We consider the Integrated Crisis Early Warning System (ICEWS) political relational event dataset [O'Brien, 2010] |
| Dataset Splits | No | The paper uses synthetic datasets, parts of books (Leviathan, Bible), and time periods for ICEWS data, but it does not explicitly provide training, validation, or test dataset splits (e.g., percentages, sample counts, or predefined split methods). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU/GPU models, memory, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper mentions using CPCIM (a baseline algorithm) and the SPMF data mining library, but it does not specify version numbers for any software components, programming languages, or libraries used in the experiments. |
| Experiment Setup | Yes | For CPCIM, we used the following hyper-parameters. The conjugate prior for conditional intensity has two parameters, the pseudo-count α and pseudo-duration β for each label... We ran experiments using α = Kρ, β = K, for various values of K = 10, 20, …, where higher values of K correspondingly increase the influence of the prior on the results. Experimental results presented in this section are for K = 20. The structural prior κ was fixed at 0.1 [Gunawardana et al., 2011]. Both PGEM learning algorithms use ϵ = 0.001 to search for left limiting points. Windows were chosen to range from between a fortnight to 2 months. For CPCIM, we used intervals of the form [t − t*, t) as basis functions, where t* ∈ {25, 50, 100, 200, 300, 400, 500, 1000, 5000}. For CPCIM, we used intervals of the form [t − t*, t) as basis functions, where t* ∈ {7, 15, 30, 45, 60, 75, 90, 180}. |
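The quoted Experiment Setup amounts to a small configuration. A minimal Python sketch of those settings follows; since the authors released no code, all function and variable names here are illustrative, and the assignment of the two lag sets to particular datasets is left unlabeled because the quote does not specify it:

```python
# Hyper-parameters quoted in the Experiment Setup row, collected as plain
# Python. Names are hypothetical; only the numeric values come from the paper.

def cpcim_prior(K, rho):
    """Conjugate prior for the conditional intensity: pseudo-count
    alpha = K * rho and pseudo-duration beta = K. Larger K strengthens
    the prior's influence on the results."""
    return {"alpha": K * rho, "beta": K}

# Reported results use K = 20; the structural prior kappa is fixed at 0.1,
# and both PGEM learners use epsilon = 0.001 to search for left limiting points.
K = 20
KAPPA = 0.1
EPSILON = 0.001

# The paper quotes two lag sets for CPCIM basis functions (one per dataset,
# but the quote does not say which set goes with which dataset).
BASIS_LAGS_A = [25, 50, 100, 200, 300, 400, 500, 1000, 5000]
BASIS_LAGS_B = [7, 15, 30, 45, 60, 75, 90, 180]

def basis_intervals(t, lags):
    """Half-open basis intervals [t - t_star, t) anchored at time t,
    one interval per lag t_star."""
    return [(t - lag, t) for lag in lags]
```

A usage example: `basis_intervals(100, BASIS_LAGS_B[:2])` yields `[(93, 100), (85, 100)]`, i.e. windows looking back 7 and 15 time units from t = 100.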