Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Graph-Aware Contrasting for Multivariate Time-Series Classification
Authors: Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, Xiaoli Li, Lihua Xie, Zhenghua Chen
AAAI 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our proposed method achieves state-of-the-art performance on various MTS classification tasks. Also, from Experimental Results (Datasets): We examine our method on ten public MTS datasets for classification, including Human Activity Recognition (HAR) (Anguita et al. 2012), ISRUC (Khalighi et al. 2016), and eight large datasets from the UEA archive, i.e., Articulary Word Recognition (AWR), Finger Movements (FM), Spoken Arabic Digits (SAD), Character Trajectories (CT), Face Detection (FD), Insect Wingbeat (IW), Motor Imagery (MI), and Self Regulation SCP1 (SRSCP1). |
| Researcher Affiliation | Academia | 1Institute for Infocomm Research, A*STAR, Singapore 2Centre for Frontier AI Research, A*STAR, Singapore 3Nanyang Technological University, Singapore |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks (e.g., a figure or section explicitly labeled 'Algorithm' or 'Pseudocode'). |
| Open Source Code | Yes | The code is available at https://github.com/Frank-Wang-oss/TS-GAC. |
| Open Datasets | Yes | We examine our method on ten public MTS datasets for classification, including Human Activity Recognition (HAR) (Anguita et al. 2012), ISRUC (Khalighi et al. 2016), and eight large datasets from the UEA archive, i.e., Articulary Word Recognition (AWR), Finger Movements (FM), Spoken Arabic Digits (SAD), Character Trajectories (CT), Face Detection (FD), Insect Wingbeat (IW), Motor Imagery (MI), and Self Regulation SCP1 (SRSCP1). |
| Dataset Splits | No | The paper states: 'For HAR and ISRUC, we randomly split them into 80% and 20% for training and testing, while for those from UEA archive, we directly adopt their pre-defined train-test splits.' It does not explicitly mention a 'validation' set or split. |
| Hardware Specification | Yes | All methods are conducted with NVIDIA GeForce RTX 3080Ti |
| Software Dependencies | No | The paper mentions implementation in PyTorch but does not provide a specific version number for this software dependency. |
| Experiment Setup | Yes | We set the batch size as 128 and choose ADAM as the optimizer with a learning rate of 3e-4. We pre-train the model and train the linear classifier for 40 epochs. |
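The reported setup (batch size 128, ADAM optimizer, learning rate 3e-4, 40 epochs) can be sketched as below. This is a minimal illustration, not the paper's TS-GAC implementation: the scalar Adam step follows the standard Kingma & Ba formulation with default betas, and the toy quadratic objective is our own stand-in for the model's loss.

```python
import math

# Hyperparameters as reported in the paper's experiment setup.
LEARNING_RATE = 3e-4
BATCH_SIZE = 128
EPOCHS = 40

def adam_step(param, grad, m, v, t, lr=LEARNING_RATE,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One scalar Adam update; returns (new_param, m, v)."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
    return param - lr * m_hat / (math.sqrt(v_hat) + eps), m, v

# Toy check: minimise f(w) = (w - 1)^2; each step moves roughly lr = 3e-4.
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 5001):
    grad = 2 * (w - 1)
    w, m, v = adam_step(w, grad, m, v, t)
print(w)
```

With a constant-sign gradient, Adam's effective step size is close to the learning rate, so reaching the minimum from a distance of 1.0 takes on the order of 1/3e-4 ≈ 3300 steps, which is why the toy loop runs for 5000.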