Boosting Transferability and Discriminability for Time Series Domain Adaptation
Authors: Mingyang Liu, Xinyang Chen, Yang Shu, Xiucheng Li, Weili Guan, Liqiang Nie
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on a wide range of time series datasets and five common applications demonstrate the state-of-the-art performance of ACON. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen); (2) School of Data Science and Engineering, East China Normal University; (3) School of Electronics and Information Engineering, Harbin Institute of Technology (Shenzhen) |
| Pseudocode | No | The paper includes diagrams and descriptions of the architecture but no formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/mingyangliu1024/ACON. |
| Open Datasets | Yes | Experiments using benchmark datasets in the sensor-based human activity recognition (HAR) task: UCIHAR [1], HHAR [35] and WISDM [20]. ... For the EMG dataset, we use the processed version released by DIVERSIFY [28]. For the PCL, CAP and HHAR-D datasets, we use the processed versions released by WOODS [11]. For the UCIHAR, HHAR-P, WISDM and FD datasets, we use the processed versions released by AdaTime [31]. |
| Dataset Splits | Yes | Each domain of each dataset is randomly divided into 80% training and 20% testing. (A minimal split sketch follows the table.) |
| Hardware Specification | Yes | The experiments were conducted on a single NVIDIA GeForce RTX 4090 with 24 GiB of memory. |
| Software Dependencies | No | The paper mentions using 1D-CNN and complex-valued linear layers as feature extractors, but it does not specify software dependencies with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | Here we report other key hyperparameters for ACON in Table 6. Table 6 lists the key hyperparameters: epoch, batch size, and learning rate. |
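
The "Dataset Splits" row states that each domain is randomly split 80%/20% into training and testing sets. Below is a minimal sketch of such a per-domain split; the array shapes, the NumPy-based implementation, and the `split_domain` helper are illustrative assumptions, not the authors' released preprocessing code from the ACON repository.

```python
# Minimal sketch of an 80/20 per-domain random split (hypothetical helper,
# not the ACON authors' code).
import numpy as np

def split_domain(X, y, train_ratio=0.8, seed=0):
    """Randomly split one domain's samples into train/test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # shuffle sample indices
    n_train = int(train_ratio * len(X))    # 80% of this domain for training
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    return (X[train_idx], y[train_idx]), (X[test_idx], y[test_idx])

# Toy example: one domain with 100 univariate time series of length 128
# and 6 activity classes (shapes chosen arbitrarily for illustration).
X = np.random.randn(100, 128).astype(np.float32)
y = np.random.randint(0, 6, size=100)
(train_X, train_y), (test_X, test_y) = split_domain(X, y)
print(train_X.shape, test_X.shape)  # (80, 128) (20, 128)
```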