Revisiting Heterophily For Graph Neural Networks
Authors: Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, Doina Precup
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | When evaluated on 10 benchmark node classification tasks, ACM-augmented baselines consistently achieve significant performance gain, exceeding state-of-the-art GNNs on most tasks without incurring significant computational burden. Code: https://github.com/SitaoLuan/ACM-GNN |
| Researcher Affiliation | Collaboration | 1McGill University; 2Mila; 3DeepMind |
| Pseudocode | No | The paper describes the steps of the ACM framework but does not provide a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Code: https://github.com/SitaoLuan/ACM-GNN |
| Open Datasets | Yes | We run 10 times on each of the 9 benchmark datasets, Cornell, Wisconsin, Texas, Film, Chameleon, Squirrel, Cora, Citeseer and Pubmed used in [37, 36], with the same 60%/20%/20% random splits for train/validation/test used in [8] and report the average test accuracy as well as the standard deviation. We also record the average running time per epoch (in milliseconds) to compare the computational efficiency. We set the temperature T in equation 4.2 to be 3, which is the number of channels. |
| Dataset Splits | Yes | We run 10 times on each of the 9 benchmark datasets, Cornell, Wisconsin, Texas, Film, Chameleon, Squirrel, Cora, Citeseer and Pubmed used in [37, 36], with the same 60%/20%/20% random splits for train/validation/test used in [8] and report the average test accuracy as well as the standard deviation. |
| Hardware Specification | No | The paper states that information on compute resources is provided in a checklist item (3.d), but the main text does not explicitly detail specific hardware models (e.g., GPU/CPU models or cloud instance types) used for the experiments. |
| Software Dependencies | No | The paper does not explicitly state specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) in the main text. |
| Experiment Setup | Yes | We set the temperature T in equation 4.2 to be 3, which is the number of channels. [...] We use Ârw as the LP filter and the corresponding HP filter is I − Ârw. Both filters are deterministic. [...] We run 10 times on each of the 9 benchmark datasets, Cornell, Wisconsin, Texas, Film, Chameleon, Squirrel, Cora, Citeseer and Pubmed used in [37, 36], with the same 60%/20%/20% random splits for train/validation/test used in [8] and the Adam [18] optimizer as used in GPRGNN [8]. |
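The filter pair quoted in the Experiment Setup row can be sketched concretely. The following is a minimal illustration, assuming Ârw denotes the row-normalized (random-walk) adjacency with self-loops, i.e. Ârw = D̂⁻¹(A + I); the function name `lp_hp_filters` and the use of dense NumPy arrays are our own choices for illustration, not from the paper's code.

```python
import numpy as np

def lp_hp_filters(adj):
    """Build the deterministic low-pass / high-pass filter pair.

    Assumes the LP filter is the random-walk normalized adjacency
    with self-loops, Arw = D^-1 (A + I), so the HP filter is I - Arw.
    """
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                  # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)   # degree of each node
    a_rw = a_hat / deg                       # row-normalize: D^-1 (A + I)
    lp = a_rw                                # low-pass filter
    hp = np.eye(n) - a_rw                    # high-pass filter
    return lp, hp

# tiny example: a 3-node path graph
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
LP, HP = lp_hp_filters(A)
```

By construction the two filters sum to the identity, so every row of the LP filter sums to 1 and the HP filter extracts exactly the component the LP filter smooths away.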