Fully-Connected Spatial-Temporal Graph for Multivariate Time-Series Data
Authors: Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, Xiaoli Li, Lihua Xie, Zhenghua Chen
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show the effectiveness of FC-STGNN on multiple MTS datasets compared to SOTA methods. We examine our method on three different downstream tasks: Remaining Useful Life (RUL) prediction, Human Activity Recognition (HAR), and Sleep Stage Classification (SSC). Specifically, we utilize C-MAPSS (Saxena et al. 2008) for RUL prediction, UCI-HAR (Anguita et al. 2012) for HAR, and ISRUC-S3 (Khalighi et al. 2016) for SSC, following the previous work (Wang et al. 2023a). |
| Researcher Affiliation | Collaboration | 1) Institute for Infocomm Research, A*STAR, Singapore; 2) Centre for Frontier AI Research, A*STAR, Singapore; 3) Nanyang Technological University, Singapore |
| Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | The code is available at https://github.com/Frank-Wang-oss/FCSTGNN. |
| Open Datasets | Yes | Specifically, we utilize C-MAPSS (Saxena et al. 2008) for RUL prediction, UCI-HAR (Anguita et al. 2012) for HAR, and ISRUC-S3 (Khalighi et al. 2016) for SSC, following the previous work (Wang et al. 2023a). |
| Dataset Splits | Yes | For C-MAPSS, which includes four sub-datasets, we adopt the pre-defined train-test splits. The training dataset is further divided into 80% and 20% for training and validation. For HAR and ISRUC, we randomly split them into 60%, 20%, and 20% for training, validation, and testing. (See the split sketch below the table.) |
| Hardware Specification | Yes | All methods are conducted with NVIDIA GeForce RTX 3080Ti and implemented by PyTorch 1.9. |
| Software Dependencies | Yes | All methods are conducted with NVIDIA GeForce RTX 3080Ti and implemented by PyTorch 1.9. |
| Experiment Setup | Yes | We set the batch size as 100, choose Adam as the optimizer with a learning rate of 1e-3, and train the model for 40 epochs. (See the training sketch below the table.) |
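
As a rough illustration of the reported dataset splits, the following sketch reproduces the 80/20 train/validation carve-out for C-MAPSS and the random 60/20/20 split for UCI-HAR and ISRUC-S3. It is not taken from the authors' code; function and variable names are hypothetical, and the actual preprocessing lives in the linked repository.

```python
# Illustrative sketch only: the split ratios reported above, not the authors' preprocessing code.
# Variable and function names are hypothetical; the real pipeline is in the linked repository.
from sklearn.model_selection import train_test_split

def split_cmapss_training_set(train_X, train_y):
    """C-MAPSS keeps its pre-defined test set; 20% of the training data is held out for validation."""
    return train_test_split(train_X, train_y, test_size=0.2, random_state=0)

def split_har_or_isruc(X, y):
    """UCI-HAR / ISRUC-S3: random 60/20/20 split into train, validation, and test sets."""
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
    return X_train, X_val, X_test, y_train, y_val, y_test
```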
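Similarly, a minimal sketch of the reported training configuration (batch size 100, Adam with learning rate 1e-3, 40 epochs), assuming a generic PyTorch model in place of the actual FC-STGNN implementation:

```python
# Illustrative sketch only: the optimisation settings reported above, with a placeholder
# model standing in for FC-STGNN (see the linked repository for the actual implementation).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_model(model: nn.Module, train_X: torch.Tensor, train_y: torch.Tensor) -> nn.Module:
    loader = DataLoader(TensorDataset(train_X, train_y), batch_size=100, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()  # classification tasks (HAR, SSC); RUL prediction would use a regression loss
    model.train()
    for epoch in range(40):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```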