Graph Pointer Neural Networks

Authors: Tianmeng Yang, Yujing Wang, Zhihan Yue, Yaming Yang, Yunhai Tong, Jing Bai

AAAI 2022, pp. 8832-8839 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted on six public node classification datasets with heterophilic graphs. The results show that GPNN significantly improves classification performance over state-of-the-art methods. In addition, the analyses reveal the advantage of the proposed GPNN in filtering out irrelevant neighbors and reducing over-smoothing.
Researcher Affiliation | Collaboration | 1 School of Electronics Engineering and Computer Science, Peking University; 2 Microsoft Research Asia. {youngtimmy, zhihan.yue, yhtong}@pku.edu.cn, {yujwang, yayaming, jbai}@microsoft.com
Pseudocode | Yes | Algorithm 1: Multi-hop node sequence sampling (a hedged sketch of such a sampling routine is given after the table).
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, such as a repository link or an explicit code-release statement.
Open Datasets | Yes | We evaluate our proposed graph pointer neural networks (GPNN) on six public heterophilic graph datasets. The dataset statistics are summarized in Table 1. For all datasets, we use the same feature vectors, labels, and ten random splits provided by Pei et al. (2020).
Dataset Splits | Yes | For all datasets, we use the same feature vectors, labels, and ten random splits provided by Pei et al. (2020). We run 2000 epochs and apply an early stopping strategy with a patience of 100 epochs on both the cross-entropy loss and the accuracy on the validation set to choose the best model (this stopping rule is sketched after the table).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | Our methods are implemented using PyTorch and PyTorch Geometric. The paper names these libraries but does not give version numbers for them.
Experiment Setup | Yes | For GPNN, the depth of node sampling is 2 with a max sequence length of 16. Other hyperparameters are tuned on the validation set: hidden units {16, 32, 64}, learning rate {0.01, 0.005}, dropout in each layer {0, 0.5, 0.99}, weight decay {1E-3, 5E-4, 5E-5, 5E-6}, and number of selected nodes from each sequence {1, 2, 4, 8} (a grid-search sketch over this search space is given after the table).
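
The multi-hop node sequence sampling named in the Pseudocode row could look roughly like the minimal sketch below, assuming a BFS-style expansion over an adjacency-list graph with the depth of 2 and max sequence length of 16 reported in the Experiment Setup row. The function name sample_node_sequence and the truncation behaviour are illustrative assumptions, not the paper's exact Algorithm 1.

    from collections import deque

    def sample_node_sequence(adj, center, depth=2, max_len=16):
        # BFS-style expansion from `center` up to `depth` hops,
        # truncated to at most `max_len` nodes (illustrative only).
        seq = [center]
        visited = {center}
        frontier = deque([(center, 0)])
        while frontier and len(seq) < max_len:
            node, hop = frontier.popleft()
            if hop == depth:
                continue
            for nbr in adj[node]:
                if nbr not in visited and len(seq) < max_len:
                    visited.add(nbr)
                    seq.append(nbr)
                    frontier.append((nbr, hop + 1))
        return seq

    # Toy usage on a 4-node graph given as an adjacency list
    adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    print(sample_node_sequence(adj, center=0))  # [0, 1, 2, 3]

The resulting node sequences would then be consumed by the pointer mechanism that selects the most relevant neighbors; that part is not sketched here.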
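The early stopping strategy quoted in the Dataset Splits row can be sketched as follows, assuming training stops once neither validation loss nor validation accuracy has improved for 100 consecutive epochs. run_epoch and evaluate are hypothetical callables standing in for the real training loop; the paper's exact criterion may differ.

    def train_with_early_stopping(run_epoch, evaluate, max_epochs=2000, patience=100):
        # run_epoch() performs one training epoch; evaluate() returns (val_loss, val_acc).
        best_loss, best_acc, wait = float("inf"), 0.0, 0
        for epoch in range(max_epochs):
            run_epoch()
            val_loss, val_acc = evaluate()
            improved = val_loss < best_loss or val_acc > best_acc
            best_loss = min(best_loss, val_loss)
            best_acc = max(best_acc, val_acc)
            wait = 0 if improved else wait + 1
            if wait >= patience:
                break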
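The hyperparameter search space listed in the Experiment Setup row could be enumerated exhaustively as in the sketch below. train_and_evaluate is a hypothetical stand-in for a full GPNN training run, and the paper does not state whether tuning was an exhaustive grid search or a partial one.

    from itertools import product

    # Search space as reported in the Experiment Setup row
    search_space = {
        "hidden_units": [16, 32, 64],
        "learning_rate": [0.01, 0.005],
        "dropout": [0.0, 0.5, 0.99],
        "weight_decay": [1e-3, 5e-4, 5e-5, 5e-6],
        "nodes_per_sequence": [1, 2, 4, 8],
    }

    def grid_search(train_and_evaluate):
        # train_and_evaluate(config) -> validation score (hypothetical callable)
        best_config, best_score = None, float("-inf")
        for values in product(*search_space.values()):
            config = dict(zip(search_space.keys(), values))
            score = train_and_evaluate(config)
            if score > best_score:
                best_config, best_score = config, score
        return best_config, best_score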