Learnable Group Transform For Time-Series

Authors: Romain Cosentino, Behnaam Aazhang

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments on diverse time-series datasets demonstrate the expressivity of this framework, which competes with state-of-the-art performances. (The full training configuration is quoted under Experiment Setup below.)
Researcher Affiliation | Academia | Romain Cosentino and Behnaam Aazhang, Department of Electrical and Computer Engineering, Rice University, USA. Correspondence to: Romain Cosentino <rom.cosentino@gmail.com>.
Pseudocode | No | The paper describes the approach in a bulleted list ('We can summarize our approach to...') but does not present it as a formal pseudocode or algorithm block. (A hedged code sketch of the pipeline appears after this table.)
Open Source Code | Yes | The code of the LGT framework is provided in the following repository: https://github.com/Koldh/LearnableGroupTransform-TimeSeries.
Open Datasets | Yes | The bird-detection dataset is extracted from the Freesound audio archive (Stowell & Plumbley, 2013). The Haptics dataset is a five-class classification problem with 155 training and 308 testing samples from the UCR Time Series Repository (Chen et al., 2015), where each time-series has 1092 time samples.
Dataset Splits | No | For the Haptics dataset, the paper mentions '155 training and 308 testing samples', and for bird detection 'a test set consisting of 33% of the total dataset'. While early stopping is mentioned for the bird dataset, implying a validation set was used, the paper does not provide split percentages or counts for a validation set across its experiments.
Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions the 'Adam Optimizer' and a '1-layer ReLU Neural Network' but does not provide version numbers for software dependencies such as programming languages, libraries, or frameworks.
Experiment Setup | Yes | For all the experiments and all the settings, i.e., LGT, nLGT, cLGT, cnLGT, the increasing and continuous piecewise affine map is initialized randomly, the optimization is performed with the Adam optimizer, and the number of knots of each piecewise affine map is 256. The mother filter used is a Morlet wavelet filter. For all models, the batch size is set to 10, the number of epochs to 50, and the learning rate is selected from a cross-validation grid. (Hedged code sketches of this setup follow.)
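
To make the quoted setup concrete, here is a minimal sketch of one standard way to parameterize an increasing, continuous piecewise affine map with 256 knots and random initialization. This is not the authors' code (their repository may use a different framework); it assumes PyTorch, and the class and parameter names (MonotonePiecewiseAffine, raw_slopes, n_knots) are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonePiecewiseAffine(nn.Module):
    """Strictly increasing, continuous piecewise affine map on [0, 1]."""

    def __init__(self, n_knots: int = 256):
        super().__init__()
        # Random initialization, as stated in the paper; the softplus below
        # makes every segment slope positive, so the map is increasing by
        # construction (one common way to enforce monotonicity).
        self.raw_slopes = nn.Parameter(torch.randn(n_knots))
        self.n_knots = n_knots

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: time grid in [0, 1], shape (T,). Returns g(t) with g(0)=0, g(1)=1.
        slopes = F.softplus(self.raw_slopes)            # (K,) positive slopes
        knots = torch.cumsum(slopes, dim=0)             # increasing knot values
        knots = torch.cat([knots.new_zeros(1), knots])  # prepend g(0) = 0
        knots = knots / knots[-1]                       # normalize to [0, 1]
        # Linear interpolation between the two knots bracketing each t.
        pos = t.clamp(0.0, 1.0) * self.n_knots
        idx = pos.floor().long().clamp(max=self.n_knots - 1)
        frac = pos - idx.float()
        return knots[idx] * (1.0 - frac) + knots[idx + 1] * frac
```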
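Building on that map, the sketch below illustrates the pipeline the paper summarizes in its bulleted list: each filter of a learnable bank is the Morlet mother filter composed with a learned warp, and the bank is applied to the signal by convolution. The filter count, filter length, assumed support of [-4, 4], and the real-valued (cosine) Morlet simplification are all assumptions, not the paper's stated values.

```python
def morlet(t: torch.Tensor, w0: float = 5.0) -> torch.Tensor:
    # Real part of the standard Morlet mother wavelet.
    return torch.cos(w0 * t) * torch.exp(-0.5 * t ** 2)

class LGTFilterBank(nn.Module):
    def __init__(self, n_filters: int = 16, filter_len: int = 512,
                 n_knots: int = 256):
        super().__init__()
        # One learnable warp per filter: psi_i(t) = morlet(g_i(t)).
        self.warps = nn.ModuleList(
            MonotonePiecewiseAffine(n_knots) for _ in range(n_filters)
        )
        self.register_buffer("grid", torch.linspace(0.0, 1.0, filter_len))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, T) raw time-series.
        support = 8.0  # assumed effective support [-4, 4] of the mother filter
        filters = torch.stack(
            [morlet((w(self.grid) - 0.5) * support) for w in self.warps]
        ).unsqueeze(1)                               # (n_filters, 1, filter_len)
        return F.conv1d(x, filters, padding="same")  # (batch, n_filters, T)
```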
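Finally, a training sketch under the quoted settings: Adam optimizer, batch size 10, 50 epochs, with the learning rate chosen from a cross-validation grid and early stopping on a held-out validation set (the paper mentions early stopping only for the bird-detection experiment, and does not list the grid values; the grid, the hidden width of the 1-layer ReLU network, the pooling, and the helper names make_model, train_one_setting, evaluate are all placeholders).

```python
from torch.utils.data import DataLoader, TensorDataset

def make_model(n_classes: int, n_filters: int = 16) -> nn.Module:
    # LGT filter bank followed by a 1-layer ReLU network, as mentioned in
    # the paper; hidden width (64) and average pooling are assumptions.
    return nn.Sequential(
        LGTFilterBank(n_filters=n_filters),
        nn.AdaptiveAvgPool1d(32),
        nn.Flatten(),
        nn.Linear(n_filters * 32, 64),
        nn.ReLU(),
        nn.Linear(64, n_classes),
    )

def evaluate(model: nn.Module, ds: TensorDataset) -> float:
    # Validation loss used for early stopping / checkpoint selection.
    xb, yb = ds.tensors
    model.eval()
    with torch.no_grad():
        loss = F.cross_entropy(model(xb), yb).item()
    model.train()
    return loss

def train_one_setting(model, train_ds, val_ds, lr):
    # Quoted settings: Adam optimizer, batch size 10, 50 epochs.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(train_ds, batch_size=10, shuffle=True)
    best_val, best_state = float("inf"), None
    for _ in range(50):
        for xb, yb in loader:
            opt.zero_grad()
            F.cross_entropy(model(xb), yb).backward()
            opt.step()
        val_loss = evaluate(model, val_ds)
        if val_loss < best_val:  # keep the best checkpoint seen so far
            best_val = val_loss
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    return best_val, best_state

# Learning-rate grid search; the grid values below are illustrative only.
# for lr in (1e-2, 1e-3, 1e-4):
#     val, state = train_one_setting(make_model(n_classes=5), train_ds, val_ds, lr)
```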