Regularization-free Diffeomorphic Temporal Alignment Nets

Authors: Ron Shapira Weber, Oren Freifeld

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on 128 UCR datasets show that the proposed method outperforms contemporary methods despite not using regularization."
Researcher Affiliation | Academia | "Ben-Gurion University. Correspondence to: Ron Shapira Weber <ronsha@post.bgu.ac.il>, Oren Freifeld <orenfr@cs.bgu.ac.il>."
Pseudocode | Yes | "Algorithm 1: The JA training with an ICAE loss"
Open Source Code | Yes | "Our code is available at https://github.com/BGU-CS-VIL/RF-DTAN."
Open Datasets | Yes | "Extensive experiments on 128 UCR datasets show that the proposed method outperforms contemporary methods despite not using regularization."
Dataset Splits | Yes | "In all of the experiments, we used the train/test splits provided by the archive. ... To account for random initializations and the stochastic nature of DL training, in each of the 3 cases we performed 5 runs on each dataset and report both the median and best results." (See the evaluation-protocol sketch below.)
Hardware Specification | Yes | "We used a machine with 12 CPU cores, 32 GB RAM, and an RTX 3090 graphics card."
Software Dependencies | Yes | "The PyTorch TSAI implementation of InceptionTime was taken from (Oguiza, 2022). In the timing experiments (Section 4.3), for DTW, DBA, and Soft-DTW we used the tslearn package (Tavenard, 2017)." (See the tslearn timing sketch below.)
Experiment Setup | Yes | "In all of our DTAN experiments, training was done via the Adam optimizer (Kingma & Ba, 2014) for 1500 epochs, with a batch size of 64; Np (the number of subintervals in the partition of Ω) was 16, and the scaling-and-squaring parameter (used by DIFW) was 8." (See the training-configuration sketch below.)
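
The "Dataset Splits" row describes using the UCR archive's official train/test splits and reporting the median and best accuracy over 5 runs. The following is a minimal sketch of that protocol, assuming tslearn's UCR loader; `train_and_evaluate` is a hypothetical placeholder standing in for RF-DTAN training, not the authors' code.

```python
import numpy as np
from tslearn.datasets import UCR_UEA_datasets

def train_and_evaluate(X_tr, y_tr, X_te, y_te, seed):
    # Placeholder: a real run would train RF-DTAN with this seed and
    # return test accuracy; here we just return a dummy score.
    rng = np.random.default_rng(seed)
    return rng.uniform(0.5, 1.0)

# Official train/test split as provided by the UCR archive.
X_train, y_train, X_test, y_test = UCR_UEA_datasets().load_dataset("GunPoint")

# 5 runs per dataset, to account for random initialization and
# the stochastic nature of DL training.
accs = [train_and_evaluate(X_train, y_train, X_test, y_test, seed=s)
        for s in range(5)]

# Report both the median and the best result over the 5 runs.
print("median:", np.median(accs), "best:", np.max(accs))
```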
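The "Software Dependencies" row names tslearn as the implementation used for DTW, DBA, and Soft-DTW in the timing experiments. Below is a minimal sketch of timing those three routines with tslearn's public API; the data shape and parameter values are assumptions, not the paper's settings.

```python
import time
import numpy as np
from tslearn.metrics import dtw, soft_dtw
from tslearn.barycenters import dtw_barycenter_averaging

X = np.random.randn(32, 128, 1)  # 32 univariate series of length 128 (assumed)

t0 = time.perf_counter()
_ = dtw(X[0], X[1])                           # pairwise DTW distance
_ = soft_dtw(X[0], X[1], gamma=1.0)           # pairwise Soft-DTW value
_ = dtw_barycenter_averaging(X, max_iter=10)  # DBA barycenter of the set
print(f"elapsed: {time.perf_counter() - t0:.3f}s")
```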
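The "Experiment Setup" row fixes the optimizer, epoch count, batch size, Np, and the scaling-and-squaring parameter. The sketch below wires those reported values into a generic PyTorch training loop; the model, data, learning rate, and loss are placeholders (the ICAE loss and DTAN architecture live in the authors' repository and are not reproduced here).

```python
import torch

model = torch.nn.Linear(128, 128)  # stands in for the DTAN alignment network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr assumed

EPOCHS = 1500         # as reported
BATCH_SIZE = 64       # as reported
N_P = 16              # subintervals in the partition of Omega, as reported
SCALING_SQUARING = 8  # scaling-and-squaring parameter used by DIFW, as reported

# Dummy data standing in for a UCR dataset.
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(256, 128), torch.zeros(256)),
    batch_size=BATCH_SIZE, shuffle=True)

for epoch in range(EPOCHS):
    for x, _ in loader:
        optimizer.zero_grad()
        loss = model(x).pow(2).mean()  # placeholder for the ICAE loss
        loss.backward()
        optimizer.step()
```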