Multi-Modality Spatio-Temporal Forecasting via Self-Supervised Learning
Authors: Jiewen Deng, Renhe Jiang, Jiaqi Zhang, Xuan Song
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiment results on two real-world MoST datasets verify the superiority of our approach compared with the state-of-the-art baselines. |
| Researcher Affiliation | Academia | Jiewen Deng¹, Renhe Jiang², Jiaqi Zhang¹ and Xuan Song¹,³ (¹Southern University of Science and Technology, ²The University of Tokyo, ³Jilin University) |
| Pseudocode | No | The paper describes the model architecture and components in detail (Section 3) but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Model implementation is available at https://github.com/beginner-sketch/MoSSL. |
| Open Datasets | No | The paper mentions “two real-world MoST datasets, namely NYC Traffic Demand and BJ Air Quality” and gives their characteristics in Table 1, but does not provide links, DOIs, or explicit statements about their public availability or repository information. |
| Dataset Splits | No | The paper states “The training phase is performed using the Adam optimizer, and the batch size is 16” and lists input/output horizons in Table 1, but does not specify the train, validation, and test splits by percentage or sample count. |
| Hardware Specification | No | The paper states “We implement the network with the Pytorch toolkit” but does not provide any specific hardware details such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper mentions “We implement the network with the Pytorch toolkit” but does not specify version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For the model, the layers of MoST Encoder is 4, where the kernel size of each dilated causal convolution component is 2, and the related expansion rate is {2, 4, 8, 16} in each layer. ... The number of cluster components K and the dimension of hidden channels dz are set to 4 and 48. The training phase is performed using the Adam optimizer, and the batch size is 16. In addition, the inputs are normalized by Z-Score. |
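
The quoted hyperparameters are concrete enough to sketch what the temporal backbone plausibly looks like. The PyTorch snippet below is a minimal illustrative sketch only, not the authors' MoSSL implementation (that lives in the repository linked above): it stacks four dilated causal convolutions with kernel size 2 and dilations {2, 4, 8, 16}, uses hidden width d_z = 48, Z-Score-normalizes the inputs, and optimizes with Adam at batch size 16. The class names, the residual skips, and the one-dilation-per-layer reading of the quoted “expansion rate” are assumptions.

```python
# Illustrative sketch of the quoted experiment setup; NOT the authors' MoSSL code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConv1d(nn.Module):
    """Dilated causal convolution: left-pads so each output sees only the past."""

    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation  # history needed per step
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); pad the past (left) side only
        return self.conv(F.pad(x, (self.left_pad, 0)))


class EncoderSketch(nn.Module):
    """Four dilated causal conv layers with kernel size 2 and dilations
    {2, 4, 8, 16}, hidden width d_z = 48, per the quoted hyperparameters."""

    def __init__(self, in_channels: int, hidden: int = 48,
                 dilations=(2, 4, 8, 16)):
        super().__init__()
        self.input_proj = nn.Conv1d(in_channels, hidden, kernel_size=1)
        self.layers = nn.ModuleList(
            [CausalConv1d(hidden, kernel_size=2, dilation=d) for d in dilations]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.input_proj(x)
        for layer in self.layers:
            h = torch.relu(layer(h)) + h  # residual skip (an assumption)
        return h


def z_score(x: torch.Tensor) -> torch.Tensor:
    """Z-Score normalization of the inputs, as stated in the paper."""
    return (x - x.mean()) / (x.std() + 1e-8)


model = EncoderSketch(in_channels=2)              # feature count is arbitrary here
optimizer = torch.optim.Adam(model.parameters())  # Adam, per the paper
batch = z_score(torch.randn(16, 2, 32))           # batch size 16, per the paper
out = model(batch)                                # -> (16, 48, 32)
loss = out.pow(2).mean()                          # dummy loss, illustration only
loss.backward()
optimizer.step()
```

With kernel size 2 and dilations 2, 4, 8, and 16, such a stack has a receptive field of 1 + (2 + 4 + 8 + 16) = 31 time steps, which makes dilated stacks of this shape a common choice for short-horizon spatio-temporal inputs.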