Spatial-Temporal Graph Learning with Adversarial Contrastive Adaptation
Authors: Qianru Zhang, Chao Huang, Lianghao Xia, Zheng Wang, Siu Ming Yiu, Ruihua Han
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the performance of our GraphST on various spatial-temporal mining tasks, including urban crime forecasting, traffic prediction, and house price prediction. We perform experiments on various datasets, which are described in detail in Table 1. |
| Researcher Affiliation | Academia | 1 The University of Hong Kong, Hong Kong; 2 Nanyang Technological University, Singapore. |
| Pseudocode | Yes | As shown in Algorithm 1, our GraphST model first constructs the multi-view region-wise graph using the three data views (POI, mobility, and distance). (A sketch of this construction step appears after the table.) |
| Open Source Code | Yes | We release our model implementation via the link: https://github.com/HKUDS/GraphST. |
| Open Datasets | No | The paper mentions using 'real-life datasets' collected from Chicago and New York City, and also refers to 'traffic benchmark datasets' and house price data, but does not provide concrete access information (specific links, DOIs, repositories, or formal citations with authors/year) for any of these datasets. |
| Dataset Splits | Yes | We follow the same settings as ST-SHN, including the region partition strategy (Chicago: 234, NYC: 180 regions), training/test data split, and evaluation metrics (MAE and MAPE). (A minimal MAE/MAPE sketch follows the table.) |
| Hardware Specification | Yes | The experiments are conducted on a server with 10 cores of an Intel(R) Core(TM) i9-9820X CPU @ 3.30GHz, 64.0 GB RAM, and one NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | Yes | All methods are implemented in Python 3.8, PyTorch 1.7.0 (GPU version), and TensorFlow 1.15.3 (GPU version) (ST-SHN). |
| Experiment Setup | Yes | To ensure a fair comparison, we set the dimensionality of the region representation d to 96, which is consistent with the settings used in previous works such as (Zhang et al., 2021a; Wu et al., 2022). We explore the number of graph propagation layers in the range of {1, 2, 3, 4, 5} and tune the learning rate to 0.0005 with weight decay of 0.01. We also tune the temperature parameter τ in the range of {0.2, 0.4, 0.6, 0.8}. ... Finally, we tune the weights of the augmented SSL loss in the range of (0, 1). (A contrastive-loss sketch using these hyperparameters follows the table.) |
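
The pseudocode row describes Algorithm 1 building a multi-view region-wise graph from the POI, mobility, and distance views. The sketch below is a minimal illustration of that construction under our own assumptions: `poi_sim` and `mobility_flow` are hypothetical region-by-region matrices, `coords` holds region centroids, and the 2 km distance threshold is invented for the example.

```python
import numpy as np

def build_multi_view_graph(poi_sim, mobility_flow, coords, dist_threshold_km=2.0):
    """Stack three region-wise adjacency views (POI, mobility, distance)
    into one multi-view graph. All argument names and the threshold are
    hypothetical; the paper's Algorithm 1 defines the actual procedure."""
    # Distance view: connect regions whose centroids lie within the threshold.
    diffs = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)
    dist_adj = (dist <= dist_threshold_km).astype(float)
    np.fill_diagonal(dist_adj, 0.0)  # no self-loops in the distance view

    # Row-normalise each view so edge weights are comparable across views.
    def normalize(adj):
        row_sum = adj.sum(axis=1, keepdims=True)
        return np.divide(adj, row_sum, out=np.zeros_like(adj), where=row_sum > 0)

    views = [normalize(poi_sim), normalize(mobility_flow), normalize(dist_adj)]
    return np.stack(views, axis=0)  # shape (3, R, R): one slice per data view
```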
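
The dataset-splits row names MAE and MAPE as the evaluation metrics. For concreteness, here is a minimal numpy rendering of both; the `eps` guard against zero targets is our assumption (crime counts are often sparse), not something the paper specifies.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error over all region-time entries."""
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred, eps=1e-8):
    """Mean Absolute Percentage Error; eps avoids division by zero targets."""
    return np.mean(np.abs(y_true - y_pred) / np.clip(np.abs(y_true), eps, None))
```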
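
The experiment-setup row tunes a temperature parameter τ, which is characteristic of an InfoNCE-style contrastive objective. The sketch below shows how such a temperature typically enters the loss under that assumption; the function name, the embedding shapes, and the choice τ = 0.4 are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, tau=0.4):
    """Generic InfoNCE loss between two views of region embeddings.
    z1, z2: (R, d) tensors; matching rows are treated as positive pairs."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau  # (R, R) cosine similarities scaled by tau
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Hypothetical usage with the quoted dimensionality d = 96 (NYC: 180 regions):
z_view1, z_view2 = torch.randn(180, 96), torch.randn(180, 96)
loss = info_nce_loss(z_view1, z_view2, tau=0.4)

# The quoted learning rate and weight decay would pair with an optimiser such
# as (the optimiser choice is an assumption, not stated in the quote):
# opt = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.01)
```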