Time Series Domain Adaptation via Sparse Associative Structure Alignment

Authors: Ruichu Cai, Jiawei Chen, Zijian Li, Wei Chen, Keli Zhang, Junjian Ye, Zhuozhang Li, Xiaoyan Yang, Zhenjie Zhang. Pages 6859-6867.

AAAI 2021

Reproducibility assessment (variable: result, followed by the LLM response):

Research Type: Experimental. "Experimental studies not only verify the good performance of our methods on three real-world datasets but also provide some insightful discoveries on the transferred knowledge."

Researcher Affiliation: Collaboration. (1) Guangdong University of Technology; (2) Huawei Noah's Ark Lab.

Pseudocode: No. The paper describes its method using equations but does not include an explicitly labeled "Pseudocode" or "Algorithm" block.

Open Source Code: No. The paper provides neither a repository link nor an explicit statement about releasing the source code for the described method.

Open Datasets: Yes. "The air quality forecast dataset (Zheng et al. 2015) is collected in the Urban Air project from 2014/05/01 to 2015/04/30, which contains air quality data, meteorological data, and weather forecast data, etc." (https://www.microsoft.com/en-us/research/project/urban-air/). "MIMIC-III (Johnson et al. 2016; Che et al. 2018) is another published dataset..." (https://mimic.physionet.org/gettingstarted/demo/).

Dataset Splits: No. The paper describes the datasets and the comparison setup (e.g., unlabeled target data is used), but it does not specify training/validation/test percentages, absolute sample counts per split, or splitting details such as k-fold cross-validation or the random seeds used for data partitioning.

Hardware Specification: No. The paper does not specify hardware details such as GPU models, CPU types, or memory used to run the experiments.

Software Dependencies: No. The paper does not list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, or specific library versions) that would be needed to replicate the experiments.

Experiment Setup: No. The paper states "We use the same parameter combination on each dataset and also apply three different random seeds to each experiment" but does not give the specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or other detailed training configurations.