$p$-Laplacian Based Graph Neural Networks

Authors: Guoji Fu, Peilin Zhao, Yatao Bian

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical studies on real-world and synthetic datasets validate our findings and demonstrate that pGNNs significantly outperform several state-of-the-art GNN architectures on heterophilic benchmarks while achieving competitive performance on homophilic benchmarks.
Researcher Affiliation | Industry | Tencent AI Lab, Shenzhen, China. Correspondence to: Guoji Fu <guoji.leo.fu@gmail.com>, Yatao Bian <yatao.bian@gmail.com>.
Pseudocode | No | The paper describes its methods through mathematical equations and textual explanations, but it does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Code available at https://github.com/guoji-fu/pGNNs.
Open Datasets | Yes | We use seven homophilic benchmark datasets: citation graphs Cora, CiteSeer, PubMed (Sen et al., 2008), Amazon co-purchase graphs Computers, Photo, co-author graphs CS, Physics (Shchur et al., 2018), and six heterophilic benchmark datasets: Wikipedia graphs Chameleon, Squirrel (Rozemberczki et al., 2021), the Actor co-occurrence graph, webpage graphs Wisconsin, Texas, Cornell (Pei et al., 2020). The node classification tasks are conducted in the transductive setting. Following Chien et al. (2021), we use the sparse splitting (2.5%/2.5%/95%) and the dense splitting (60%/20%/20%) to randomly split the homophilic and heterophilic graphs into training/validation/testing sets, respectively. Dataset statistics and their levels of homophily are presented in Appendix E.
Dataset Splits | Yes | Following Chien et al. (2021), we use the sparse splitting (2.5%/2.5%/95%) and the dense splitting (60%/20%/20%) to randomly split the homophilic and heterophilic graphs into training/validation/testing sets, respectively. (A sketch of this splitting procedure is given after the table.)
Hardware Specification | No | The paper does not specify hardware details such as GPU models, CPU types, or memory amounts used for running the experiments.
Software Dependencies | No | The paper mentions using the PyTorch Geometric library but does not specify its version, nor the versions of other key software dependencies required for replication.
Experiment Setup | Yes | We set the number of layers to 2, the maximum number of epochs to 1000, the early-stopping patience to 200, and the weight decay to 0 or 0.0005 for all models. The other hyperparameters for each model are listed below. 1.0GNN, 1.5GNN, 2.0GNN, 2.5GNN: number of hidden units: 16; learning rate: {0.001, 0.01, 0.05}; dropout rate: {0, 0.5}; µ: {0.01, 0.1, 0.2, 1, 10}; K: {4, 6, 8}. (A sketch of the implied search grid is given after the table.)
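
The random transductive splits quoted in the Dataset Splits row are straightforward to reproduce. Below is a minimal sketch assuming per-node boolean masks, as commonly used with PyTorch Geometric; the helper name random_split_masks and the seed handling are illustrative, not taken from the authors' code.

```python
# Minimal sketch: random transductive split masks for the
# sparse (2.5%/2.5%/95%) and dense (60%/20%/20%) settings above.
# `random_split_masks` is a hypothetical helper, not the authors' code.
import torch

def random_split_masks(num_nodes: int, train_frac: float, val_frac: float,
                       seed: int = 0):
    """Return boolean train/val/test masks over num_nodes nodes."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask

# Sparse splitting (homophilic graphs), 2.5%/2.5%/95%:
# train, val, test = random_split_masks(data.num_nodes, 0.025, 0.025)
# Dense splitting (heterophilic graphs), 60%/20%/20%:
# train, val, test = random_split_masks(data.num_nodes, 0.60, 0.20)
```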
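
Similarly, the Experiment Setup row implies a small hyperparameter grid. The sketch below enumerates that grid; the dictionary keys, the fixed-settings dict, and the iter_configs helper are hypothetical names for illustration, and the training loop itself is elided since the paper does not describe it.

```python
# Minimal sketch of the search grid implied by the reported setup.
# Names (grid, fixed, iter_configs) are illustrative, not the authors' code.
from itertools import product

grid = {
    "lr": [0.001, 0.01, 0.05],      # learning rate
    "dropout": [0.0, 0.5],          # dropout rate
    "mu": [0.01, 0.1, 0.2, 1, 10],  # µ as listed above
    "K": [4, 6, 8],
    "weight_decay": [0.0, 0.0005],
}

fixed = {
    "num_layers": 2,
    "hidden_units": 16,
    "max_epochs": 1000,
    "patience": 200,  # early stopping
}

def iter_configs(grid, fixed):
    """Yield one full config per point in the Cartesian product of grid."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield {**fixed, **dict(zip(keys, values))}

for cfg in iter_configs(grid, fixed):
    pass  # train one pGNN variant (p in {1.0, 1.5, 2.0, 2.5}) with cfg
```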