Differentiable Algorithm for Marginalising Changepoints
Authors: Hyoungjin Lim, Gwonsoo Che, Wonyeol Lee, Hongseok Yang
AAAI 2020, pp. 4828–4835
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically show the effectiveness of our algorithm in this application by tackling the posterior inference problem on synthetic and real-world data. |
| Researcher Affiliation | Academia | Hyoungjin Lim, Gwonsoo Che, Wonyeol Lee, Hongseok Yang School of Computing KAIST, South Korea {lmkmkr, gche, wonyeol, hongseok.yang}@kaist.ac.kr |
| Pseudocode | Yes | Algorithm 1: Algorithm for marginalising changepoints. (A toy differentiable recursion in the same spirit is sketched after this table.) |
| Open Source Code | No | The paper does not provide an explicit statement about releasing its source code or a direct link to a code repository for the methodology described. |
| Open Datasets | Yes | For the real-world application, we used well-log data (Fearnhead 2006). Reference: Fearnhead, P. 2006. Exact and efficient Bayesian inference for multiple changepoint problems. Statistics and Computing 16(2):203–213. |
| Dataset Splits | No | The paper describes using synthetic and real-world data but does not specify explicit train/validation/test dataset splits with percentages, sample counts, or citations to predefined splits. |
| Hardware Specification | Yes | The experiments were performed on a Ubuntu 16.04 machine with Intel i7-7700 CPU with 16GB of memory. |
| Software Dependencies | No | The paper mentions using 'PyStan' and 'Anglican' but does not specify their version numbers. |
| Experiment Setup | Yes | For HMC-naive and HMC-ours, we used the No-U-Turn Sampler (NUTS)... with default hyper-parameters, except for adapt_delta = 0.95. For IPMCMC and LMH, we used the implementations in Anglican... with default hyper-parameters, except for the following IPMCMC setup: number-of-nodes = 8 for both the synthetic and well-log data, and pool = 8 for the well-log data. For each chain of HMC-ours, we generated 30K samples with random initialisation (when possible) after burning in 1K samples. (A hedged PyStan sketch of this NUTS configuration follows the table.) |
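The paper's Algorithm 1 itself is not reprinted in this report. As a rough illustration of the idea the title names (summing a changepoint prior over all segmentations with a recursion that stays differentiable in the continuous parameters), here is a minimal PyTorch sketch. It assumes a toy Gaussian segment score, a Bernoulli(hazard) changepoint prior, and an O(T^2) recursion; the names `seg_loglik`, `log_marginal`, and `hazard` are ours, not the paper's.

```python
import math
import torch

def seg_loglik(y_seg, log_sigma):
    """Toy per-segment score: i.i.d. Gaussian around the segment's
    empirical mean with noise scale exp(log_sigma). The recursion
    below is generic in this term; any segment score that is
    differentiable in the continuous parameters would do."""
    resid = y_seg - y_seg.mean()
    n = y_seg.shape[0]
    return (-0.5 * (resid / log_sigma.exp()).pow(2).sum()
            - n * (log_sigma + 0.5 * math.log(2 * math.pi)))

def log_marginal(y, log_sigma, hazard):
    """Exact O(T^2) forward recursion summing out every changepoint
    configuration. alpha[t] is the log-marginal of y[:t]; the last
    segment y[s:t] pays the prior for a changepoint before position s
    (if s > 0) and for no changepoint at the positions inside the
    segment. logsumexp keeps the result smooth in log_sigma."""
    log_h, log_1mh = math.log(hazard), math.log1p(-hazard)
    alpha = [torch.tensor(0.0)]            # log-marginal of the empty prefix
    for t in range(1, y.shape[0] + 1):
        terms = []
        for s in range(t):                 # last segment is y[s:t]
            prior = (log_h if s > 0 else 0.0) + (t - s - 1) * log_1mh
            terms.append(alpha[s] + prior + seg_loglik(y[s:t], log_sigma))
        alpha.append(torch.logsumexp(torch.stack(terms), dim=0))
    return alpha[-1]

# Once the discrete changepoints are summed out, the log-density is a
# smooth function of the continuous parameter, so gradients flow.
y = torch.tensor([0.1, -0.2, 0.0, 5.1, 4.8, 5.2])
log_sigma = torch.tensor(0.0, requires_grad=True)
log_marginal(y, log_sigma, hazard=0.1).backward()
print(log_sigma.grad)
```

This is the design point behind the paper's application: after marginalisation, no discrete latent variables remain, which is what lets gradient-based samplers such as NUTS run on the model at all.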
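Since the paper reports PyStan's NUTS with default hyper-parameters except adapt_delta = 0.95, and 30K retained samples per chain after a 1K burn-in, a plausible PyStan 2.x invocation would look as follows. The Stan file name, data path, and data layout are illustrative assumptions; the paper releases no code.

```python
import numpy as np
import pystan  # PyStan 2.x; the paper names PyStan but no version

# Hypothetical file names: the marginalised Stan program and the
# well-log data file stand in for artifacts the paper does not ship.
model = pystan.StanModel(file='changepoint_marginalised.stan')
y = np.loadtxt('well_log.txt')

# Mirrors the reported setup: NUTS defaults except adapt_delta = 0.95;
# 1K warmup draws discarded, 30K draws retained per chain.
fit = model.sampling(
    data={'T': len(y), 'y': y},
    warmup=1000,
    iter=31000,            # warmup + 30K kept samples
    chains=1,
    control={'adapt_delta': 0.95},
)
print(fit)
```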