Contrastive Learning for Unsupervised Domain Adaptation of Time Series
Authors: Yilmazcan Ozyurt, Stefan Feuerriegel, Ce Zhang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our framework using a wide range of time series datasets to demonstrate its effectiveness and show that it achieves state-of-the-art performance for time series UDA. |
| Researcher Affiliation | Academia | Yilmazcan Ozyurt (ETH Zürich, yozyurt@ethz.ch); Stefan Feuerriegel (LMU Munich, feuerriegel@lmu.de); Ce Zhang (ETH Zürich, ce.zhang@inf.ethz.ch) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. The framework is described narratively and visually (Figure 1). |
| Open Source Code | Yes | Codes are available at https://github.com/oezyurty/CLUDA. |
| Open Datasets | Yes | We conduct extensive experiments using established benchmark datasets, namely WISDM (Kwapisz et al., 2011), HAR (Anguita et al., 2013), and HHAR (Stisen et al., 2015). |
| Dataset Splits | Yes | We split the patients of each dataset into 3 parts for training/validation/testing (ratio: 70/15/15). |
| Hardware Specification | Yes | For training and testing, we used NVIDIA GeForce GTX 1080 Ti with 11GB GPU memory. |
| Software Dependencies | No | The paper mentions "PyTorch" as the implementation framework but does not specify a version number or other software dependencies with version numbers. |
| Experiment Setup | Yes | In this section, we provide details on the hyperparameter tuning. Table 7 lists the tuning range of all hyperparameters. |
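The dataset-splits evidence above states that patients are split into train/validation/test partitions at a 70/15/15 ratio. A minimal sketch of such a patient-level split is shown below; the function name, seed, and shuffling strategy are illustrative assumptions, not the authors' actual code (which lives in their repository).

```python
import random

def split_patients(patient_ids, ratios=(0.70, 0.15, 0.15), seed=0):
    """Partition patient IDs into train/val/test at the stated 70/15/15 ratio.

    Illustrative sketch: splitting at the *patient* level (rather than the
    time-series-window level) ensures no patient's data leaks across
    partitions. The exact procedure used by the authors may differ.
    """
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n = len(ids)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]  # remainder goes to the test partition
    return train, val, test

train, val, test = split_patients(range(100))
print(len(train), len(val), len(test))  # 70 15 15
```

Giving the remainder to the test partition keeps every patient assigned to exactly one split even when the counts do not divide evenly.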