Improving Generalization of Dynamic Graph Learning via Environment Prompt
Authors: Kuo Yang, Zhengyang Zhou, Qihe Huang, Limin Li, Yuxuan Liang, Yang Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on seven real-world datasets across domains showcase the superiority of EpoD against baselines, and toy example experiments further verify the powerful interpretability and rationality of our EpoD. |
| Researcher Affiliation | Academia | 1 University of Science and Technology of China (USTC), Hefei, China 2 Suzhou Institute for Advanced Research, USTC, Suzhou, China 3 State Key Laboratory of Resources and Environmental Information System, Beijing, China 4 The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China |
| Pseudocode | Yes | Algorithm 1: The training process of Epo D |
| Open Source Code | Yes | We release the code using anonymous links. |
| Open Datasets | Yes | We employ seven cross-domain real-world dynamic graph datasets to evaluate our EpoD. PEMS08 and PEMS04 [34] are classic medium-scale traffic network datasets from California with 5-minute intervals; SD and GBA [24] are newly proposed large-scale traffic network datasets. COLLAB [40] is an academic collaboration dataset comprising papers published over 16 years; Yelp [33] is a business review dataset; ACT [18] shows students' actions on a MOOC platform over 30 days. |
| Dataset Splits | Yes | the training set is composed of data from 2019, while the data from 2020 is divided into a validation set and a test set. |
| Hardware Specification | Yes | We implement our EpoD with PyTorch 1.11.0 on a server with NVIDIA A100-PCIE-40GB. |
| Software Dependencies | Yes | We implement our EpoD with PyTorch 1.11.0 on a server with NVIDIA A100-PCIE-40GB. |
| Experiment Setup | Yes | In the experiments of traffic flow prediction, our task is to predict the next 24 steps based on historical 12 steps observations (12→24)... All experiments are repeated with 10 different random seeds... We set β = 0.2 in our implementation... we set L = 5 as a trade-off between performance and time consumption. |
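To make the quoted experiment setup concrete, the sliding-window forecasting protocol (12 historical steps in, 24 future steps out, averaged over 10 random seeds) can be sketched as below. This is an illustrative sketch only: the window-building helper and the placeholder "model" are our own assumptions, not code from the EpoD release.

```python
import numpy as np

# Hypothetical sketch of the traffic-forecasting protocol described above:
# predict the next 24 steps from the previous 12. Names and shapes are
# illustrative and are not taken from the EpoD codebase.
HISTORY, HORIZON = 12, 24

def make_windows(series: np.ndarray):
    """Slice a 1-D series into (history, horizon) input/target pairs."""
    xs, ys = [], []
    for t in range(len(series) - HISTORY - HORIZON + 1):
        xs.append(series[t : t + HISTORY])
        ys.append(series[t + HISTORY : t + HISTORY + HORIZON])
    return np.stack(xs), np.stack(ys)

series = np.arange(100, dtype=float)  # stand-in for one traffic sensor's readings
X, Y = make_windows(series)
print(X.shape, Y.shape)  # (65, 12) (65, 24)

# The paper repeats every experiment with 10 different random seeds;
# a naive mean-over-seeds evaluation loop would look like:
maes = []
for seed in range(10):
    rng = np.random.default_rng(seed)
    pred = Y + rng.normal(scale=0.1, size=Y.shape)  # placeholder "model" output
    maes.append(np.mean(np.abs(pred - Y)))
print(f"MAE averaged over 10 seeds: {np.mean(maes):.3f}")
```

The same windowing applies per sensor in the PEMS-style datasets; only the leading dimension (number of nodes) changes.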