Understanding Heterophily for Graph Neural Networks

Authors: Junfu Wang, Yuanfang Guo, Liang Yang, Yunhong Wang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on both synthetic and real-world data verify the effectiveness of our theory.
Researcher Affiliation | Academia | (1) State Key Laboratory of Software Development Environment, Beihang University, Beijing, China; (2) School of Computer Science and Engineering, Beihang University, Beijing, China; (3) Shen Yuan Honors College, Beihang University, Beijing, China; (4) School of Artificial Intelligence, Hebei University of Technology, Tianjin, China.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described in this paper, nor does it explicitly state that code will be released.
Open Datasets | Yes | We employ eight node classification datasets to verify the effectiveness of our theory, including two citation networks (i.e., Cora (Yang et al., 2016) and Arxiv-year (Lim et al., 2021)), two Wikipedia networks (i.e., Chameleon and Squirrel) (Pei et al., 2020), a co-occurrence network (i.e., Actor (Pei et al., 2020)), a co-purchasing network (i.e., Amazon-ratings (Platonov et al., 2023)), a crowdsourcing co-working network (i.e., Workers (Platonov et al., 2023)), and a patent network (i.e., Snap-patents) (Platonov et al., 2023).
Dataset Splits | Yes | For each dataset, we randomly selected 60%/20%/20% of the nodes to construct the training/validation/testing sets.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions that the code is implemented in NumPy and uses the Adam optimizer, but it does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | The number of hidden neurons, learning rate, weight decay rate, and dropout rate are obtained by a grid-search strategy. These hyperparameters are searched over: number of hidden neurons (hidden) in [16, 32, 64, 128, 256]; learning rate (lr) in [0.001, 0.005, 0.01]; weight decay rate (wd) in [0, 1e-5, 5e-4, 1e-4]; dropout rate (dropout) in [0, 0.2, 0.5].
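The Open Datasets row lists eight public benchmarks but no loading code. As one hedged illustration, several of these datasets ship with PyTorch Geometric; the snippet below is our assumption (the paper itself only mentions NumPy), and the remaining datasets (Arxiv-year, Amazon-ratings, Workers, Snap-patents) would have to be obtained from their original releases (Lim et al., 2021; Platonov et al., 2023).

```python
# Hypothetical loader: the paper does not say how the data is read.
# Assumes a recent PyTorch Geometric installation (pip install torch_geometric).
from torch_geometric.datasets import Planetoid, WikipediaNetwork, Actor

cora      = Planetoid(root="data", name="Cora")[0]               # citation network
chameleon = WikipediaNetwork(root="data", name="chameleon")[0]   # Wikipedia network
squirrel  = WikipediaNetwork(root="data", name="squirrel")[0]    # Wikipedia network
actor     = Actor(root="data")[0]                                # co-occurrence network

print(cora)  # e.g. Data(x=[2708, 1433], edge_index=[2, ...], y=[2708])
```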
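The Dataset Splits row describes a uniformly random 60%/20%/20% node split. A minimal NumPy sketch of such a split is shown below; the seed and the helper name random_split are our assumptions, since the authors release no code.

```python
import numpy as np

def random_split(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Randomly partition node indices into train/val/test sets (60%/20%/20%)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)          # shuffled node indices
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]          # remaining ~20% of the nodes
    return train_idx, val_idx, test_idx

# Example: split a graph with 2,708 nodes (the size of Cora).
train_idx, val_idx, test_idx = random_split(2708)
```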
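The Experiment Setup row only quotes the hyperparameter grids. The sketch below shows one straightforward way to enumerate them with itertools.product; the callable train_and_evaluate is a hypothetical placeholder for the training loop, which the paper does not provide.

```python
from itertools import product

# Hyperparameter grids quoted from the paper's experiment setup.
grid = {
    "hidden":  [16, 32, 64, 128, 256],
    "lr":      [0.001, 0.005, 0.01],
    "wd":      [0, 1e-5, 5e-4, 1e-4],
    "dropout": [0, 0.2, 0.5],
}

def grid_search(train_and_evaluate):
    """Try every hyperparameter combination and keep the one with the best
    validation score. `train_and_evaluate` is a hypothetical callable
    mapping a config dict to a validation accuracy."""
    best_score, best_config = -float("inf"), None
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        config = dict(zip(keys, values))
        score = train_and_evaluate(config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score
```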