Kernel Change-point Detection with Auxiliary Deep Generative Models
Authors: Wei-Cheng Chang, Chun-Liang Li, Yiming Yang, Barnabás Póczos
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed approach significantly outperformed other state-of-the-art methods in our comparative evaluation of benchmark datasets and simulation studies. In Section 5, we conduct extensive benchmark evaluation showing the outstanding performance of KL-CPD in real-world CPD applications. With simulation-based analysis in Section 6, in addition, we can see the proposed method not only boosts the kernel power but also evades the performance degradation as data dimensionality of time series increases. |
| Researcher Affiliation | Academia | Wei-Cheng Chang, Chun-Liang Li, Yiming Yang & Barnabás Póczos Carnegie Mellon University Pittsburgh, PA 15213, USA {wchang2,chunlial,yiming,bapoczos}@cs.cmu.edu |
| Pseudocode | Yes | Algorithm 1: KL-CPD, our proposed algorithm. |
| Open Source Code | Yes | Finally, our experiment code and datasets are available at https://github.com/OctoberChang/klcpd_code. |
| Open Datasets | Yes | Detailed descriptions are available in Appendix B.1. Bee-Dance (http://www.cc.gatech.edu/~borg/ijcv_psslds/) records the pixel locations in x and y dimensions and angle differences of bee movements. Ethologists are interested in the three-stage bee waggle dance and aim at identifying the change point from one stage to another, where different stages serve as communication with other honey bees about the location of pollen and water. |
| Dataset Splits | Yes | Following Lai et al. (2018); Saatçi et al. (2010); Liu et al. (2013), the datasets are split into the training set (60%), validation set (20%) and test set (20%) in chronological order. |
| Hardware Specification | Yes | Our algorithms are implemented in Python (PyTorch; Paszke et al., 2017) and run on NVIDIA GeForce GTX 1080 Ti GPUs. |
| Software Dependencies | No | The paper mentions 'Python (PyTorch; Paszke et al., 2017)' and 'MATLAB code' for baselines, but it does not specify exact version numbers for PyTorch, Python, or MATLAB. |
| Experiment Setup | Yes | For hyper-parameter tuning in ARMA, the time lags p, q are chosen from {1, 2, 3, 4, 5}. For ARGP and ARGP-BOCPD, the time lag order p is set to the same as in ARMA and the hyperparameter of the kernel is learned by maximizing the marginalized likelihood. For RDR-KCPD, the window size w is chosen from {25, 50}, sub-dim k = 5, and α is chosen from {0.01, 0.1, 1}. For Mstats-KCPD and KL-CPD, the window size is w = 25, and we use an RBF kernel with the median heuristic setting the kernel bandwidth. The hidden dimension of the GRU is dh = 10 for MMD-codespace, MMD-negsample and KL-CPD. For KL-CPD, λ is chosen from {0.1, 1, 10} and β is chosen from {10^-3, 10^-1, 1, 10} (see the sketch below the table). |
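For concreteness, the chronological 60/20/20 split and the median-heuristic RBF kernel bandwidth quoted in the table can be illustrated with a minimal sketch. This is an assumption-laden illustration using NumPy arrays; the helper names (`chronological_split`, `median_heuristic_bandwidth`, `rbf_kernel`) are ours and do not come from the paper's released code.

```python
import numpy as np

def chronological_split(series, train_frac=0.6, val_frac=0.2):
    """Split a time series into train/validation/test sets in chronological order (60/20/20)."""
    n = len(series)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return series[:n_train], series[n_train:n_train + n_val], series[n_train + n_val:]

def median_heuristic_bandwidth(X):
    """Median heuristic: set the kernel bandwidth to the median pairwise Euclidean distance."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dists = np.sqrt(sq_dists)
    return np.median(dists[dists > 0])

def rbf_kernel(X, Y, sigma):
    """Gaussian RBF kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = rng.normal(size=(1000, 3))           # toy multivariate time series
    train, val, test = chronological_split(series)
    sigma = median_heuristic_bandwidth(train[:200])
    K = rbf_kernel(train[:25], val[:25], sigma)   # window size w = 25, as in the paper
    print(train.shape, val.shape, test.shape, sigma, K.shape)
```

The hyper-parameter grids quoted above (e.g., λ ∈ {0.1, 1, 10}, β ∈ {10^-3, 10^-1, 1, 10}) would then be searched on the validation portion of this split; the sketch does not reproduce the KL-CPD training loop itself.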