CauDiTS: Causal Disentangled Domain Adaptation of Multivariate Time Series
Authors: Junxin Lu, Shiliang Sun
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | CauDiTS is evaluated on four benchmark datasets, demonstrating its effectiveness and outperforming state-of-the-art methods. |
| Researcher Affiliation | Academia | 1School of Computer Science and Technology, East China Normal University, Shanghai, China. 2Department of Automation, Shanghai Jiao Tong University, Shanghai, China. Correspondence to: Shiliang Sun <slsun@cs.ecnu.edu.cn>. |
| Pseudocode | Yes | The pseudocode for CauDiTS is presented in Algorithm 1. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing the source code or a direct link to a code repository for the described methodology. |
| Open Datasets | Yes | We conduct experiments on four real-world multivariate time-series datasets: Boiler (Cai et al., 2021), WISDM (Kwapisz et al., 2011), HAR (Anguita et al., 2013) and HHAR (Stisen et al., 2015). |
| Dataset Splits | Yes | Following CLUDA (Ozyurt et al., 2022), we split each dataset into training, validation and test with a ratio of 0.7, 0.15 and 0.15, respectively. |
| Hardware Specification | Yes | The training, validation, and testing of all baselines and CauDiTS are conducted on identical dataset splits using an NVIDIA GeForce GTX 4090 with 24GB GPU memory and the PyTorch framework. |
| Software Dependencies | Yes | The training, validation, and testing of all baselines and CauDiTS are conducted on identical dataset splits using an NVIDIA GeForce GTX 4090 with 24GB GPU memory and the PyTorch framework. ... All methods are optimized using an Adam optimizer with β1 = 0.5 and β2 = 0.999. |
| Experiment Setup | Yes | We train each method for a maximum of 15000 training steps with a batch size of 32 for WISDM, HAR, and HHAR, and 128 for Boiler. All methods are optimized using an Adam optimizer with β1 = 0.5 and β2 = 0.999. The sliding window τmax is set to 128 for WISDM, HAR and HHAR, and 36 for Boiler. ... Table 4 lists the tuning ranges of all hyperparameters for all methods. The search range for the learning rates is from 1 × 10⁻⁴ to 1 × 10⁻¹. |
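
For orientation, the sketch below reconstructs the reported experimental setup (70/15/15 train/val/test split following CLUDA, Adam with β1 = 0.5 and β2 = 0.999, batch size 32 for WISDM/HAR/HHAR and 128 for Boiler, sliding window τmax of 128 or 36, learning rate tuned in [1×10⁻⁴, 1×10⁻¹]). The dataset handling and the placeholder model are assumptions for illustration, not the authors' released code.

```python
# Hedged sketch of the reported setup; the dataset object and the placeholder
# model are hypothetical, since no official CauDiTS code release is stated.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split

# Reported per-dataset settings: batch size and sliding-window length tau_max.
CONFIG = {
    "WISDM":  {"batch_size": 32,  "window": 128},
    "HAR":    {"batch_size": 32,  "window": 128},
    "HHAR":   {"batch_size": 32,  "window": 128},
    "Boiler": {"batch_size": 128, "window": 36},
}

def make_loaders(dataset, name, seed=0):
    """Split a dataset 70/15/15 into train/val/test, as reported (following CLUDA)."""
    cfg = CONFIG[name]
    n = len(dataset)
    n_train, n_val = int(0.7 * n), int(0.15 * n)
    n_test = n - n_train - n_val
    train_set, val_set, test_set = random_split(
        dataset, [n_train, n_val, n_test],
        generator=torch.Generator().manual_seed(seed),
    )
    return (
        DataLoader(train_set, batch_size=cfg["batch_size"], shuffle=True),
        DataLoader(val_set, batch_size=cfg["batch_size"]),
        DataLoader(test_set, batch_size=cfg["batch_size"]),
    )

# Placeholder sequence encoder standing in for the (unreleased) CauDiTS model.
# Optimizer matches the paper: Adam with beta1 = 0.5, beta2 = 0.999; the
# learning rate itself is searched in [1e-4, 1e-1], here set to an arbitrary 1e-3.
model = nn.GRU(input_size=3, hidden_size=64, batch_first=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.5, 0.999))
```

Training would then run for at most 15000 steps per the quoted setup; early stopping or checkpoint selection on the validation split is not specified here and would follow the CLUDA protocol the paper references.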