GReTo: Remedying dynamic graph topology-task discordance via target homophily
Authors: Zhengyang Zhou, Qihe Huang, Gengyu Lin, Kuo Yang, Lei Bai, Yang Wang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, our solution achieves significant improvements against best baselines, notably improving 24.79% on KnowAir and 3.60% on Metr-LA. We evaluate our solution on four dynamic graphs and successfully achieve 3.20% to 24.79% improvements against baselines on MAPE, where KnowAir (24.79%) with higher intra-graph negative heterophily ratios (Tab. 4) especially benefits from flexible signed message passing. |
| Researcher Affiliation | Academia | (1) Key Laboratory of Precision and Intelligent Chemistry, University of Science and Technology of China (USTC), Hefei, China; (2) School of Software Engineering, USTC; (3) School of Computer Science and Technology, USTC; (4) Shanghai AI Laboratory, Shanghai, China |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Our code is available at https://github.com/zzyy0929/ICLR23-GReTo. |
| Open Datasets | Yes | Traffic: (1) Metr-LA: Highway traffic status consisting of 207 loop detectors of Los Angeles (Li et al., 2018). (2) PeMS-Bay: Traffic statuses collected by California Transportation Agency, including 325 sensors in Bay Area (Li et al., 2018). Climate: (3) KnowAir: PM2.5 concentrations, covering 184 main cities in China (Wang et al., 2020). (4) Temperature: Urban temperatures of the same 184 cities as KnowAir (Wang et al., 2020). |
| Dataset Splits | No | No specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) needed to reproduce the data partitioning was explicitly provided in the main text. It mentions 'Training' and 'Evaluations' but not the specific splits used. |
| Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running experiments are provided. |
| Software Dependencies | No | The paper mentions software components like 'Adam SGD', 'LSTMs', and 'Conv1D' but does not provide specific version numbers for any libraries or frameworks used (e.g., PyTorch version, Python version, specific library versions). |
| Experiment Setup | Yes | The homophily criteria ε are set as 0.08, 0.10, 0.10 and 0.12, while the maximal propagation steps K are set as 6, 6, 3 and 4 on Metr-LA, PeMS-Bay, KnowAir and Temperature, according to empirical evaluations. The size of TCN kernels is set to 1×3 on all datasets. For fairness, for each compared GNN, the number of hidden layers is set to 6 and the hidden dimension for each GCN is set to 64. For GAT, the number of heads is set to 8 according to the default setting in their papers. We initialize the learning rate to 1e-3 with a weight decay of 0.99. (These settings are gathered in the configuration sketch below.) |
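
The following is a minimal, hypothetical Python sketch (not the authors' released code from the repository above) that simply collects the hyperparameters reported in the Experiment Setup row into one configuration, so the per-dataset values of ε and K can be looked up alongside the shared architecture and optimization settings. All variable and function names here are assumptions for illustration only.

```python
# Hypothetical configuration sketch assembled from the reported settings;
# names such as CONFIG and settings_for are illustrative, not from the paper's code.

DATASETS = ["Metr-LA", "PeMS-Bay", "KnowAir", "Temperature"]

CONFIG = {
    # Homophily criterion epsilon and maximal propagation steps K, per dataset.
    "epsilon": {"Metr-LA": 0.08, "PeMS-Bay": 0.10, "KnowAir": 0.10, "Temperature": 0.12},
    "max_propagation_steps": {"Metr-LA": 6, "PeMS-Bay": 6, "KnowAir": 3, "Temperature": 4},
    # Shared architecture settings reported for fair comparison.
    "tcn_kernel_size": (1, 3),   # 1x3 TCN kernels on all datasets
    "num_hidden_layers": 6,      # for each compared GNN
    "gcn_hidden_dim": 64,
    "gat_num_heads": 8,          # default setting from the GAT paper
    # Optimization settings as reported.
    "learning_rate": 1e-3,
    "weight_decay": 0.99,
}

def settings_for(dataset: str) -> dict:
    """Return the flat hyperparameter set for one dataset."""
    assert dataset in DATASETS, f"unknown dataset: {dataset}"
    flat = {k: v for k, v in CONFIG.items() if not isinstance(v, dict)}
    flat["epsilon"] = CONFIG["epsilon"][dataset]
    flat["max_propagation_steps"] = CONFIG["max_propagation_steps"][dataset]
    return flat

if __name__ == "__main__":
    # Example: inspect the settings that would apply to KnowAir.
    print(settings_for("KnowAir"))
```

Keeping the per-dataset values (ε, K) separate from the shared ones makes it explicit which knobs the paper tunes per graph and which are fixed across all four benchmarks.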