Towards Dynamic Spatial-Temporal Graph Learning: A Decoupled Perspective
Authors: Binwu Wang, Pengkun Wang, Yudong Zhang, Xu Wang, Zhengyang Zhou, Lei Bai, Yang Wang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments on dynamic spatial-temporal graph datasets further demonstrate the competitive performance, superior efficiency, and strong scalability of the proposed framework." and "We evaluate our framework on real-world datasets, and experimental results demonstrate the superiority of our framework in prediction performance, training efficiency, and scalability for new knowledge." |
| Researcher Affiliation | Collaboration | Binwu Wang (1,2), Pengkun Wang (1,2), Yudong Zhang (1,2), Xu Wang (1,2), Zhengyang Zhou (1,2), Lei Bai (3), Yang Wang (1,2,*); 1: University of Science and Technology of China; 2: Suzhou Institute of Advanced Research, University of Science and Technology of China; 3: Shanghai AI Laboratory |
| Pseudocode | Yes | "The details are shown in Fig. 1 and the pseudo-code is shown in Algorithm 1." and "Algorithm 1: Decoupled Training Strategy for Dynamic Spatiotemporal Graph Learning" |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | To evaluate the generalization performance of the framework, we evaluate it on the Knowair dataset (Wang et al. 2020) from the atmospheric domain |
| Dataset Splits | Yes | "For two training strategies, we split the training data along the temporal dimension into training datasets and validation datasets with a ratio of 7:3." (See the split sketch below the table.) |
| Hardware Specification | Yes | We report the total time of training and validation for efficiency evaluation on A100 GPUs |
| Software Dependencies | No | The paper mentions using the AdamW optimizer but does not specify any software dependencies with version numbers (e.g., PyTorch, TensorFlow, Python versions). |
| Experiment Setup | Yes | "The learning rate is set to 10^-3. Sampling node ratios k_r and k_e are equal to 4% and 1%, respectively. The maximum epoch is 100. We set the forecasting length and lookback length to 12. For two training strategies, we split the training data along the temporal dimension into training datasets and validation datasets with a ratio of 7:3." (See the training-loop sketch below the table.) |
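
As a reading aid for the Dataset Splits row, here is a minimal sketch of the reported 7:3 temporal train/validation split. The helper name, array shape, and sizes are illustrative assumptions; the paper does not release code.

```python
import numpy as np

def temporal_split(data: np.ndarray, train_ratio: float = 0.7):
    """Split a [time, nodes, features] array along the time axis.

    Hypothetical helper mirroring the paper's 7:3 split of the
    training data along the temporal dimension.
    """
    split = int(len(data) * train_ratio)
    return data[:split], data[split:]

# Example with assumed shapes: 10,000 time steps, 307 nodes, 1 feature.
series = np.random.rand(10_000, 307, 1)
train, val = temporal_split(series)  # 7,000 training / 3,000 validation steps
```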
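And, for the Experiment Setup row, a minimal sketch wiring the reported hyperparameters (AdamW, learning rate 10^-3, 100 epochs, lookback and forecasting length 12, sampling ratios k_r = 4% and k_e = 1%) into a generic PyTorch training loop. The model, batch, and loss function are placeholders, not the paper's framework, and the node-sampling ratios are recorded but not reproduced here.

```python
import torch

# Hyperparameters as reported in the paper.
LOOKBACK, HORIZON = 12, 12   # lookback / forecasting lengths
K_R, K_E = 0.04, 0.01        # sampling node ratios k_r, k_e (used by the
                             # paper's decoupled strategy; not modeled here)
MAX_EPOCHS = 100

model = torch.nn.Linear(LOOKBACK, HORIZON)  # stand-in model (assumption)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = torch.nn.L1Loss()  # MAE; the paper does not specify the loss here

for epoch in range(MAX_EPOCHS):
    x = torch.randn(32, LOOKBACK)  # dummy batch of 32 series (assumption)
    y = torch.randn(32, HORIZON)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```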