Closed-Form Diffeomorphic Transformations for Time Series Alignment
Authors: Iñigo Martinez, Elisabeth Viles, Igor G. Olaizola
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on several datasets to validate the generalization ability of our model to unseen data for time-series joint alignment. Results show significant improvements both in terms of efficiency and accuracy. |
| Researcher Affiliation | Academia | 1Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), San Sebastian, Spain 2TECNUN School of Engineering, University of Navarra, San Sebastian, Spain 3Institute of Data Science and Artificial Intelligence, University of Navarra, Pamplona, Spain. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We present Diffeomorphic Fast Warping (DIFW), an open-source library and highly optimized implementation... DIFW (see Supplementary Material). ... https://github.com/imartinezl/difw (a minimal sketch of the closed-form idea follows the table) |
| Open Datasets | Yes | The UCR (Dau et al., 2019) time series classification archive contains 85 real-world datasets and we use a subset containing 84 datasets, as in DTAN (Weber et al., 2019) and ResNet-TW (Huang et al., 2021). Details about these datasets can be found in Appendix J. (A data-loading sketch follows the table.) |
| Dataset Splits | Yes | Experiments were conducted with the provided train and test split. |
| Hardware Specification | Yes | We used the following computing infrastructure in our experiments: Intel(R) Core(TM) i7-6560U CPU @ 2.20GHz, 4 cores, 16 GB RAM with an Nvidia Tesla P100 graphics card. |
| Software Dependencies | No | The paper mentions software like "PyTorch", "NumPy", "C++", and "CUDA" but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | For each of the UCR datasets, we train our TTN for joint alignment as in (Weber et al., 2019), where NP ∈ {16, 32, 64}, λσ ∈ {10⁻³, 10⁻²}, λs ∈ {0.1, 0.5}, the number of transformer layers ∈ {1, 5}, scaling-and-squaring iterations ∈ {0, 8} and the option to apply the zero-boundary constraint. ... The network was initialized by Xavier initialization using a normal distribution and was trained for 500 epochs with a 10⁻⁵ learning rate, a batch size of 32 and the Adam (Kingma & Ba, 2014) optimizer with β₁ = 0.9, β₂ = 0.98 and ϵ = 10⁻⁸. (A configuration sketch follows the table.) |
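
The paper's central idea is a closed-form solution for integrating continuous piecewise-affine (CPA) velocity fields in 1D, which the open-source DIFW library implements with PyTorch/C++/CUDA backends. The sketch below shows only the closed-form flow of dx/dt = a·x + b inside a single tessellation cell; the full method also tracks cell-boundary crossings and gradients in closed form. The function name and tolerances here are illustrative assumptions, not part of the difw API.

```python
import torch

def integrate_affine_cell(x0: torch.Tensor, a: torch.Tensor, b: torch.Tensor,
                          t: float = 1.0) -> torch.Tensor:
    """Closed-form flow of dx/dt = a*x + b within one tessellation cell.

    a != 0: x(t) = x0 * exp(a*t) + (b/a) * (exp(a*t) - 1)
    a == 0: x(t) = x0 + b*t
    A complete CPA integrator (as in DIFW) also detects when the trajectory
    leaves the cell and chains these per-cell solutions across cells.
    """
    exp_at = torch.exp(a * t)
    linear = x0 + b * t                                   # degenerate cell (a ~ 0)
    affine = x0 * exp_at + (b / (a + 1e-12)) * (exp_at - 1.0)
    return torch.where(a.abs() < 1e-8, linear, affine)

# Illustrative call with made-up cell parameters:
x0 = torch.linspace(0.0, 1.0, 5)          # points assumed to lie in one cell
print(integrate_affine_cell(x0, torch.full_like(x0, 0.5), torch.full_like(x0, 0.1)))
```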
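For the Open Datasets and Dataset Splits rows, one minimal way to reproduce the data loading is sketched below. The paper itself only references the UCR archive and its provided train/test splits; the use of tslearn and the dataset name "GunPoint" are our assumptions for illustration.

```python
# Load one UCR dataset with the archive's provided train/test split.
# tslearn is our choice here, not something the paper specifies.
from tslearn.datasets import UCR_UEA_datasets

X_train, y_train, X_test, y_test = UCR_UEA_datasets().load_dataset("GunPoint")
print(X_train.shape, X_test.shape)  # (n_train, length, 1), (n_test, length, 1)
```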
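The Experiment Setup row pins down the initialization and optimizer settings. The sketch below mirrors those quoted values (Xavier initialization with a normal distribution, Adam with β₁ = 0.9, β₂ = 0.98, ϵ = 10⁻⁸, learning rate 10⁻⁵, batch size 32, 500 epochs); the placeholder network stands in for the paper's temporal transformer network (TTN), whose architecture is not reproduced here.

```python
import torch
from torch import nn

# Placeholder model standing in for the paper's TTN; layer sizes are arbitrary.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 16))

def xavier_normal_init(module: nn.Module) -> None:
    """Xavier initialization with a normal distribution, as quoted above."""
    if isinstance(module, nn.Linear):
        nn.init.xavier_normal_(module.weight)
        nn.init.zeros_(module.bias)

model.apply(xavier_normal_init)

# Optimizer settings exactly as quoted in the Experiment Setup row.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.98), eps=1e-8)
BATCH_SIZE, EPOCHS = 32, 500
```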