Disentangling Time Series Representations via Contrastive Independence-of-Support on l-Variational Inference

Authors: Khalid Oublal, Said Ladjal, David Benhaiem, Emmanuel Le Borgne, François Roueff

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method qualitatively and quantitatively across various datasets with ground-truth labels, examining the generalization capabilities of the learned representations on correlated data.
Researcher Affiliation | Collaboration | Institut Polytechnique de Paris, Telecom Paris LTCI/S2A; TotalEnergies OneTech, DS&AI
Pseudocode | Yes | Pseudocode is provided in Appendix D.8 (DIOSC cosine similarity); a hedged sketch follows the table.
Open Source Code | Yes | Code available at https://institut-polytechnique-de-paris.github.io/time-disentanglement-lib.
Open Datasets | Yes | Experiments are conducted on three public datasets: UK-DALE (Kelly & Knottenbelt, 2015), REDD (Kolter & Johnson, 2011), and REFIT (Murray et al., 2017), all providing power measurements from multiple homes.
Dataset Splits | No | No explicit validation split (percentage or count) is given; the paper specifies only training and testing samples, e.g. 'scenario A involved training on REFIT and testing on UK-DALE, 18.3k samples... the test set consisted of 3.5k samples'.
Hardware Specification | Yes | The experiments are performed on four NVIDIA A100 GPUs.
Software Dependencies | No | The paper does not give version numbers for its software dependencies; a statement such as 'Python 3.8, PyTorch 1.9, and CUDA 11.1' is absent.
Experiment Setup | Yes | Based on the grid search, DIOSC's best performance is obtained with (λ = 2.3, β = 1.5). The experiments are performed on four NVIDIA A100 GPUs, and hyperparameter settings are given in Appendix D. The paper sets L = 16, fixes the input time window to 256 steps, and sets the latent-space dimension to dz = 16 (see the configuration sketch after the table).
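On the Pseudocode row: Appendix D.8 of the paper provides DIOSC pseudocode built around cosine similarity. Below is a minimal PyTorch sketch of the pairwise cosine-similarity computation that such contrastive objectives rely on; the function name and batch handling are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption): pairwise cosine similarity between latent
# codes, the basic building block of contrastive objectives like DIOSC's.
# `pairwise_cosine_similarity` is a hypothetical helper, not the paper's API.
import torch
import torch.nn.functional as F

def pairwise_cosine_similarity(z: torch.Tensor) -> torch.Tensor:
    """Return the (B, B) matrix of cosine similarities between rows of z."""
    z = F.normalize(z, dim=-1)  # scale each latent vector to unit norm
    return z @ z.t()            # inner products of unit vectors = cosines

# Toy usage: a batch of 4 latent codes with dz = 16, matching the latent
# dimension reported in the paper.
z = torch.randn(4, 16)
sim = pairwise_cosine_similarity(z)
assert sim.shape == (4, 4)
```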
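On the Experiment Setup row: the reported hyperparameters can be collected into a single configuration object. This is a hypothetical sketch; only the numeric values come from the paper, while the key names and structure are assumptions.

```python
# Hypothetical configuration collecting the settings reported in the paper.
# Only the values are from the source; the key names are illustrative.
config = {
    "lambda": 2.3,        # best grid-search value for lambda
    "beta": 1.5,          # best grid-search value for beta
    "L": 16,              # reported setting L = 16
    "window_steps": 256,  # input time-window length in steps
    "dz": 16,             # latent-space dimension
}
```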