Exploiting Representation Curvature for Boundary Detection in Time Series

Authors: Yooju Shin, Jaehyun Park, Susik Yoon, Hwanjun Song, Byung Suk Lee, Jae-Gil Lee

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments using diverse real-world time-series datasets confirm the superiority of RECURVE over state-of-the-art methods." |
| Researcher Affiliation | Academia | 1KAIST, 2Korea University, 3University of Vermont |
| Pseudocode | No | The paper provides mathematical definitions (Definitions 3.1, 3.2, and 3.3) for concepts such as TRAJECTORY and CURVATURE, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | "RECURVE is implemented using PyTorch 1.13.0, and its source code is available at https://github.com/kaist-dmlab/RECURVE." |
| Open Datasets | Yes | "The profiles of the four datasets used in our experiments are summarized in Table 1... WISDM [48], HAPT [49], and mHealth [49] are human action recognition datasets... 50salads [50] is a video dataset that captures 25 people preparing salads; the I3D features of 2048 dimensions are extracted, following Farha and Gall [3]." |
| Dataset Splits | No | The paper mentions using a validation dataset for thresholding: "Alternatively, if we have a validation dataset, we select the threshold φ that yields the best performance based on an evaluation measure." However, it does not specify the actual train/validation splits or percentages used in the experiments, either for the representation learning models or for the overall evaluation. |
| Hardware Specification | Yes | "We use Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz and NVIDIA RTX 3090 for every experiment." |
| Software Dependencies | Yes | "RECURVE is implemented using PyTorch 1.13.0, and its source code is available at https://github.com/kaist-dmlab/RECURVE." |
| Experiment Setup | Yes | "The window size, 2I, and the number of training epochs for each dataset are shown in Table 1... The learning rate is set to 0.005 for all datasets. The hyperparameter w, indicating the length of a representation vector, is set to 5% of the mean segment length... The moving average in Eq. (6) is computed using the ten timestamps preceding and following each timestamp. ... For RuLSIF, we conduct a grid search for the learning rate (LR) = {0.05, 0.1, 0.2}, the weight of L2 normalization λL2 = {0.01, 0.05, 0.1}, and the parameter of the RuLSIF loss α = {0.01, 0.05, 0.1}... train it with a batch size of 32 for 50 epochs. For KL-CPD, we conduct a grid search to determine the optimal hidden dimensionality h = {10, 50, 100}... batch size is set to 64, the number of epochs is set to 3, and the learning rate is set to 0.001." |
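The paper defines curvature over representation trajectories (Definitions 3.1–3.3) and smooths scores with a moving average over the ten timestamps before and after each point. The exact definitions are not reproduced here; the sketch below is an illustrative, assumption-laden approximation: it scores each timestamp by the turning angle between incoming and outgoing displacement vectors in representation space, then applies a symmetric moving average. The function names, the displacement span `w`, and the use of turning angles are this sketch's assumptions, not the paper's verified formulation.

```python
import numpy as np

def turning_angle_curvature(reps, w=5):
    """Illustrative curvature proxy: the turning angle (radians) between
    the incoming and outgoing displacement vectors at each timestamp.

    reps: (T, d) array of per-timestamp representation vectors.
    w: displacement span in timestamps (hypothetical analogue of the
       paper's hyperparameter w)."""
    T = reps.shape[0]
    curv = np.zeros(T)
    for t in range(w, T - w):
        u = reps[t] - reps[t - w]      # incoming displacement
        v = reps[t + w] - reps[t]      # outgoing displacement
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        if denom > 0:
            cos = np.clip(u @ v / denom, -1.0, 1.0)
            curv[t] = np.arccos(cos)   # 0 on a straight path, larger at turns
    return curv

def smooth(scores, k=10):
    """Symmetric moving average over the k timestamps preceding and
    following each timestamp (k=10 mirrors the setting quoted above)."""
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    return np.convolve(scores, kernel, mode="same")
```

On a trajectory that moves in a straight line through representation space the score stays near zero, while an abrupt change of direction (a candidate segment boundary) produces a spike; a threshold φ on the smoothed score would then flag boundaries.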
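The experiment setup quotes a grid search for the RuLSIF baseline over LR = {0.05, 0.1, 0.2}, λL2 = {0.01, 0.05, 0.1}, and α = {0.01, 0.05, 0.1}. A minimal sketch of such an exhaustive search is shown below; `train_and_score` is a hypothetical placeholder for training RuLSIF with one configuration and returning a validation score, and nothing here comes from the paper's released code.

```python
from itertools import product

def grid_search(train_and_score):
    """Exhaustively evaluate every combination of the quoted RuLSIF grid
    and return the best configuration with its validation score.

    train_and_score(lr, lambda_l2, alpha) -> float is a hypothetical
    callback supplied by the caller; higher scores are better."""
    grid = {
        "lr": [0.05, 0.1, 0.2],          # learning rate
        "lambda_l2": [0.01, 0.05, 0.1],  # weight of L2 normalization
        "alpha": [0.01, 0.05, 0.1],      # RuLSIF loss parameter
    }
    best_cfg, best_score = None, float("-inf")
    for values in product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = train_and_score(**cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

The same pattern covers the KL-CPD search over hidden dimensionality h = {10, 50, 100} by swapping in a different grid dictionary.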