When Do GNNs Work: Understanding and Improving Neighborhood Aggregation
Authors: Yiqing Xie, Sha Li, Carl Yang, Raymond Chi-Wing Wong, Jiawei Han
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that allowing for node-specific aggregation degrees has a significant advantage over current GNNs. |
| Researcher Affiliation | Academia | University of Illinois at Urbana-Champaign, IL, USA; The Hong Kong University of Science and Technology, Hong Kong, China; Emory University, GA, USA |
| Pseudocode | No | No explicitly labeled “Pseudocode” or “Algorithm” block was found. The model is described through mathematical equations and textual explanations. |
| Open Source Code | Yes | All data is publicly available and our code can be found at https://github.com/raspberryice/ala-gcn. |
| Open Datasets | Yes | We conduct the experiments on three citation networks: Cora, CiteSeer [Sen et al., 2008] and PubMed [Namata et al., 2012]. |
| Dataset Splits | Yes | We tested with 1%, 3% and 5% training size for Cora and CiteSeer, and with 0.3%, 0.15% and 0.05% training size for PubMed. ... We also run the experiments with the standard 20 labeled nodes per class setting to compare with reported performance. (A split-construction sketch follows the table.) |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are provided in the paper. |
| Software Dependencies | Yes | We implement our models with PyTorch 1.4.0 and DGL [Wang et al., 2019]. |
| Experiment Setup | No | The paper states: “We use the same set of hyper-parameters for ALa GAT and GAT, and use another set of hyper-parameters for GCN, Graph SAGE and ALa GCN. For other models, we use their default parameters.” However, no specific hyperparameter values or detailed training configurations are explicitly listed in the main text. |
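
The paper names its dependencies (PyTorch 1.4.0, DGL) and the training-set sizes, but the split-construction code itself lives in the linked repository rather than the main text. Below is a minimal sketch, assuming the standard DGL citation-graph loaders (`CoraGraphDataset`, `CiteseerGraphDataset`, `PubmedGraphDataset`) and simple random node sampling, of how splits with the reported fractions and the standard 20-labels-per-class setting could be built. It is an illustration under those assumptions, not the authors' released implementation.

```python
# Sketch only: builds train masks matching the fractions quoted above.
# Dataset classes and field names follow the standard DGL citation-graph API
# and are assumptions about the setup, not taken from the paper itself.
import torch
from dgl.data import CoraGraphDataset, CiteseerGraphDataset, PubmedGraphDataset

# Training-set fractions reported in the paper.
TRAIN_FRACTIONS = {
    "cora":     [0.01, 0.03, 0.05],      # 1%, 3%, 5%
    "citeseer": [0.01, 0.03, 0.05],      # 1%, 3%, 5%
    "pubmed":   [0.0005, 0.0015, 0.003], # 0.05%, 0.15%, 0.3%
}

def load_graph(name):
    """Load one of the three citation networks used in the paper."""
    dataset_cls = {
        "cora": CoraGraphDataset,
        "citeseer": CiteseerGraphDataset,
        "pubmed": PubmedGraphDataset,
    }[name]
    return dataset_cls()[0]  # a single DGLGraph with node features and labels

def random_split(graph, train_frac, seed=0):
    """Random train mask covering `train_frac` of all nodes."""
    gen = torch.Generator().manual_seed(seed)
    n = graph.num_nodes()
    perm = torch.randperm(n, generator=gen)
    train_mask = torch.zeros(n, dtype=torch.bool)
    train_mask[perm[: int(train_frac * n)]] = True
    return train_mask

def per_class_split(graph, per_class=20, seed=0):
    """Standard split: `per_class` labeled nodes per class (20 in the paper)."""
    gen = torch.Generator().manual_seed(seed)
    labels = graph.ndata["label"]
    train_mask = torch.zeros(graph.num_nodes(), dtype=torch.bool)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(len(idx), generator=gen)[:per_class]]
        train_mask[idx] = True
    return train_mask

if __name__ == "__main__":
    graph = load_graph("cora")
    for frac in TRAIN_FRACTIONS["cora"]:
        mask = random_split(graph, frac)
        print(f"Cora, {frac:.2%} training size: {int(mask.sum())} labeled nodes")
    print("Standard split:", int(per_class_split(graph).sum()), "labeled nodes")
```

For the exact splits and hyper-parameters used in the reported results, the authors' repository (https://github.com/raspberryice/ala-gcn) remains the authoritative source.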