Interaction Point Processes via Infinite Branching Model
Authors: Peng Lin, Bang Zhang, Ting Guo, Yang Wang, Fang Chen
AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments on synthetic and real-world data demonstrate the superiority of the IBM. |
| Researcher Affiliation | Collaboration | Peng Lin, Bang Zhang, Ting Guo, Yang Wang, Fang Chen; NICTA, Australian Technology Park, 13 Garden Street, Eveleigh NSW 2015, Australia; School of Computer Science and Engineering, The University of New South Wales, Australia; {peng.lin, bang.zhang, ting.guo, yang.wang, fang.chen}@nicta.com.au |
| Pseudocode | No | The IBM can be described as follows for generating a sequence of points {t_i}: 1. Sample immigrant intensity μ ∼ Exponential(λ_μ). 2. Sample t_1 from PP(μ)... (see the simulation sketch below the table). |
| Open Source Code | No | The paper does not provide any information about the availability of open-source code for the described methodology. |
| Open Datasets | No | In this section, we use the synthetic data generated from a traditional Hawkes process to evaluate the IBM. In this experiment, we collected 922 failures from a metropolitan water supply network. |
| Dataset Splits | Yes | The simplified IBM with all points sharing the same offspring intensity is applied to the first 100 samples. ... The final model is used to ... measure the log-likelihood on the remaining 30 samples. We use 4, 5, 6 and 7 years of data for training, and the obtained models are used to predict the number of failures in the following year. |
| Hardware Specification | No | The paper does not provide specific hardware details for running its experiments. |
| Software Dependencies | No | The paper describes mathematical models and algorithms but does not specify any software names with version numbers. |
| Experiment Setup | Yes | Immigrant intensities are set to 0.8 for both kernels. For each kernel, 130 synthetic temporal samples are generated on the time interval [0, 20]. We use 4, 5, 6 and 7 years of data for training, and the obtained models are used to predict the number of failures in the following year. (The synthetic protocol is sketched below the table.) |
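
The generative steps quoted in the Pseudocode row can be made concrete with a short branching-style simulation. The snippet below is a minimal sketch in Python/NumPy, assuming an exponential triggering kernel `alpha * beta * exp(-beta * dt)` and illustrative parameter values; it is not the authors' implementation (no code was released), only a rendering of the quoted steps: draw μ ∼ Exponential(λ_μ), draw immigrants from PP(μ), then let every point spawn offspring recursively.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_branching(lam_mu=1.0, alpha=0.5, beta=1.0, T=20.0):
    """Branching-style simulation of a self-exciting point process on [0, T].

    Follows the generative sketch quoted above: draw the immigrant
    intensity mu ~ Exponential(lam_mu), generate immigrants from a
    homogeneous Poisson process PP(mu), then let every point spawn
    offspring from an (assumed) exponential kernel alpha*beta*exp(-beta*dt).
    """
    mu = rng.exponential(1.0 / lam_mu)            # step 1: immigrant intensity
    n_immigrants = rng.poisson(mu * T)            # step 2: immigrants ~ PP(mu)
    generation = list(rng.uniform(0.0, T, n_immigrants))
    events = []
    while generation:                             # step 3: offspring cascade
        events.extend(generation)
        children = []
        for t in generation:
            n_kids = rng.poisson(alpha)           # expected offspring per point
            for dt in rng.exponential(1.0 / beta, n_kids):
                if t + dt < T:
                    children.append(t + dt)
        generation = children
    return np.sort(np.array(events))

sample = simulate_branching(alpha=0.5, beta=1.0, T=20.0)
print(len(sample), "events on [0, 20]")
```

The cascade terminates almost surely because the assumed branching ratio `alpha` is below one, i.e. each point produces fewer than one offspring in expectation.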
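The Dataset Splits and Experiment Setup rows describe the synthetic evaluation protocol: 130 samples on [0, 20] with immigrant intensity 0.8, the first 100 used for fitting and the remaining 30 for held-out log-likelihood. The sketch below mirrors that protocol under stated assumptions: since the IBM inference code is unavailable, it reuses the branching construction above with the quoted fixed immigrant intensity and scores the held-out samples with a standard Hawkes log-likelihood (exponential kernel) at the ground-truth simulation parameters, as a stand-in for a fitted model. All parameter values other than those quoted in the table are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_hawkes(mu=0.8, alpha=0.5, beta=1.0, T=20.0):
    """Branching simulation of a Hawkes process with fixed immigrant
    intensity mu and exponential kernel alpha * beta * exp(-beta * dt)."""
    generation = list(rng.uniform(0.0, T, rng.poisson(mu * T)))
    events = []
    while generation:
        events.extend(generation)
        children = []
        for t in generation:
            for dt in rng.exponential(1.0 / beta, rng.poisson(alpha)):
                if t + dt < T:
                    children.append(t + dt)
        generation = children
    return np.sort(np.array(events))

def hawkes_loglik(events, mu, alpha, beta, T):
    """Exact Hawkes log-likelihood on [0, T] with exponential kernel."""
    events = np.asarray(events)
    loglik, decay, prev = 0.0, 0.0, None
    for t in events:
        if prev is not None:
            # recursive update of sum_j exp(-beta * (t - t_j)) over earlier events
            decay = (decay + 1.0) * np.exp(-beta * (t - prev))
        loglik += np.log(mu + alpha * beta * decay)
        prev = t
    # compensator: integral of the intensity over the observation window
    compensator = mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - events)))
    return loglik - compensator

# Quoted protocol: 130 samples on [0, 20], first 100 train, remaining 30 held out.
T = 20.0
samples = [simulate_hawkes(mu=0.8, alpha=0.5, beta=1.0, T=T) for _ in range(130)]
train, test = samples[:100], samples[100:]
# A real run would fit the model on `train`; here we plug in the ground-truth
# parameters as a placeholder for the fitted IBM.
held_out = np.mean([hawkes_loglik(s, mu=0.8, alpha=0.5, beta=1.0, T=T) for s in test])
print(f"mean held-out log-likelihood over 30 samples: {held_out:.2f}")
```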