Learning a Warping Distance from Unlabeled Time Series Using Sequence Autoencoders

Authors: Abubakar Abid, James Y. Zou

NeurIPS 2018

Reproducibility assessment (Variable: Result, followed by supporting evidence):

Research Type: Experimental
  Evidence: "In systematic experiments across different domains, we show that Autowarp often outperforms hand-crafted trajectory similarity metrics."

Researcher Affiliation: Academia
  Evidence: "Abubakar Abid, Stanford University (a12d@stanford.edu); James Zou, Stanford University (jamesz@stanford.edu)"

Pseudocode: Yes
  Evidence: "The complete algorithm for batched Autowarp is shown in Algorithm 1 in Appendix B."

Open Source Code: No
  The paper does not provide any statement or link indicating that its source code is publicly available.

Open Datasets: Yes
  Evidence: "Data can be downloaded from https://crawdad.org/epfl/mobility/20090224/cab/. Data can be downloaded from http://kdd.ics.uci.edu/databases/auslan/auslan.data.html."

Dataset Splits: No
  The paper mentions using synthetic trajectories and subsets of real datasets but does not provide specific percentages or counts for training, validation, or test splits, nor does it refer to standard predefined splits.

Hardware Specification: No
  The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.

Software Dependencies: No
  The paper does not list any specific software dependencies with version numbers (e.g., programming language versions, library versions, or solver versions).

Experiment Setup: Yes
  Evidence: "We used Autowarp (Algorithm 1 with hyperparameters dh = 10, S = 64, p = 1/5) to learn a warping distance from the data (α = 0.88, γ = 0, ϵ = 0.33)." "We used Autowarp (Algorithm 1 with hyperparameters dh = 20, S = 32, p = 1/5) to learn a warping distance from the data (learned distance: α = 0.29, γ = 0.22, ϵ = 0.48)."
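For context on what a "warping distance" computes: Autowarp's learned distances (parameterized by α, γ, ϵ as quoted above) belong to a family that generalizes dynamic-time-warping-style metrics. The sketch below is the textbook dynamic time warping (DTW) distance between two 1-D sequences, not the paper's exact parameterized family; it is included only to illustrate the kind of alignment-based distance the paper learns.

```python
import math

def dtw_distance(x, y):
    """Classic DTW distance between 1-D sequences x and y.

    Illustrative only: Autowarp learns a parameterized generalization
    of distances like this one; this is plain DTW with absolute cost.
    """
    n, m = len(x), len(y)
    # D[i][j] = cost of the best warping path aligning x[:i] with y[:j]
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # advance in x only
                                 D[i][j - 1],      # advance in y only
                                 D[i - 1][j - 1])  # advance in both
    return D[n][m]
```

Because DTW aligns indices non-linearly, time-shifted copies of the same trajectory can have zero distance, e.g. `dtw_distance([0, 0, 1], [0, 1])` is `0.0`.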