Self-Bounded Prediction Suffix Tree via Approximate String Matching

Authors: Dongwoo Kim, Christian Walder

ICML 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Through experiments on synthetic datasets as well as three real-world datasets, we show that the approximate matching PST results in better predictive performance than the other variants of PST." "In Sections 5 and 6, we verify our approach on synthetic datasets and demonstrate the improved predictive performance of our model on three real-world datasets." |
| Researcher Affiliation | Collaboration | ¹Australian National University, Canberra, ACT, Australia; ²Data to Decisions CRC, Kent Town, SA, Australia; ³Data61 at CSIRO, Canberra, ACT, Australia. |
| Pseudocode | Yes | "Algorithm 1: Online learning algorithm for unbounded aPST" and "Algorithm 2: Online learning algorithm for self-bounded aPST". |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | "We use three datasets: a symbolic music dataset (Walder, 2016), from which we retain MIDI onset events only; a system call dataset (Hofmeyr et al., 1998); and a human activity dataset (Ordóñez et al., 2013)." |
| Dataset Splits | Yes | "For every experiment, we use the first 40% of a sequence to train, the subsequent 20% of the sequence to validate, and the final 40% of the sequence to test the models." |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper does not provide any specific software dependencies with version numbers. |
| Experiment Setup | Yes | "For the parameters λ, ξ, and ϵ, we test all possible configurations of λ ∈ {2, 4, 6, 8, 10, 12}, ξ ∈ {0.5, 0.7, 0.9, 0.99}, and ϵ ∈ {0, 1}, and choose the best model based on the accuracy on the validation set." |
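The reported protocol (a sequential 40/20/40 train/validation/test split followed by an exhaustive grid search over λ, ξ, and ϵ, selecting the configuration with the best validation accuracy) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; `train_model` and `accuracy` are hypothetical callables standing in for the PST training and evaluation routines.

```python
# Sketch of the evaluation protocol described in the paper:
# a sequential 40%/20%/40% split plus grid search over (lambda, xi, epsilon).
# `train_model` and `accuracy` are hypothetical placeholders, not the authors' API.
from itertools import product


def split_sequence(seq):
    """Split one sequence into 40% train / 20% validation / 40% test, in order."""
    n = len(seq)
    train = seq[: int(0.4 * n)]
    valid = seq[int(0.4 * n): int(0.6 * n)]
    test = seq[int(0.6 * n):]
    return train, valid, test


def grid_search(seq, train_model, accuracy):
    """Try every (lambda, xi, epsilon) configuration from the paper's grid
    and keep the one with the highest validation accuracy."""
    train, valid, _ = split_sequence(seq)
    best_score, best_config = float("-inf"), None
    for lam, xi, eps in product([2, 4, 6, 8, 10, 12],
                                [0.5, 0.7, 0.9, 0.99],
                                [0, 1]):
        model = train_model(train, lam=lam, xi=xi, eps=eps)
        score = accuracy(model, valid)
        if score > best_score:
            best_score, best_config = score, (lam, xi, eps)
    return best_config, best_score
```

The grid has 6 × 4 × 2 = 48 configurations per sequence; the held-out final 40% is touched only once, after model selection.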