Meta Propagation Networks for Graph Few-shot Semi-supervised Learning

Authors: Kaize Ding, Jianling Wang, James Caverlee, Huan Liu (pp. 6524-6531)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our approach offers easy and substantial performance gains compared to existing techniques on various benchmark datasets.
Researcher Affiliation | Academia | 1 Arizona State University, 2 Texas A&M University
Pseudocode | Yes | Algorithm 1: The learning algorithm of Meta-PN.
Open Source Code | Yes | The implementation and extended manuscript of this work are publicly available at https://github.com/kaize0409/Meta-PN.
Open Datasets | Yes | We conduct experiments on five graph benchmark datasets for semi-supervised node classification to demonstrate the effectiveness of the proposed Meta-PN. The detailed statistics of the datasets are summarized in Table 1. Specifically, Cora-ML, CiteSeer (Sen et al. 2008) and PubMed (Namata et al. 2012) are the three most widely used citation networks. MS-CS is a co-authorship network based on the Microsoft Academic Graph (Shchur et al. 2018). For data splitting, we follow the previous work (Klicpera, Bojchevski, and Günnemann 2019) and split each dataset into training set (i.e., K nodes per class for a K-shot task), validation set and test set. In addition, to further evaluate the performance of different methods on large-scale graphs, we include the ogbn-arxiv dataset from the Open Graph Benchmark (OGB) (Hu et al. 2020).
Dataset Splits | Yes | For data splitting, we follow the previous work (Klicpera, Bojchevski, and Günnemann 2019) and split each dataset into training set (i.e., K nodes per class for a K-shot task), validation set and test set.
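The K-shot split described above (K labeled nodes per class, plus a validation set, with everything else held out for testing) can be sketched as follows. This is an illustrative reconstruction, not the authors' splitting code; the `num_val` size and seed are hypothetical placeholders.

```python
import random

def k_shot_split(labels, k, num_val, seed=0):
    """Sample a K-shot split: k labeled nodes per class for training,
    then num_val validation nodes, with the remainder as the test set.
    Illustrative sketch only; not the paper's exact procedure."""
    rng = random.Random(seed)

    # Group node indices by class label.
    by_class = {}
    for node, c in enumerate(labels):
        by_class.setdefault(c, []).append(node)

    # Draw k labeled nodes per class for the training set.
    train = []
    for nodes in by_class.values():
        train.extend(rng.sample(nodes, k))

    # Remaining nodes are shuffled and split into validation and test sets.
    rest = [n for n in range(len(labels)) if n not in set(train)]
    rng.shuffle(rest)
    val, test = rest[:num_val], rest[num_val:]
    return train, val, test

# Toy graph with 10 nodes over 3 classes, a 1-shot task:
labels = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
train, val, test = k_shot_split(labels, k=1, num_val=3)
```

With `k=1` the training set contains exactly one node per class, and the three sets partition the node indices.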
Hardware Specification | Yes | All our experiments are conducted with a 12 GB Titan Xp GPU.
Software Dependencies | No | The paper mentions that 'The proposed Meta-PN is implemented in PyTorch' but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | We use a 2-layer MLP with 64 hidden units for the feature-label transformer. We set the batch size to 1,024 for Cora-ML and CiteSeer, and 4,096 for the other datasets. We apply L2 regularization with λ = 0.005 on the weights of the first neural layer and set the dropout rate for both neural layers to be 0.3. For methods based on label propagation, we use K = 10 power iteration (propagation) steps by default. To make a fair comparison, we let all the configurations of the baselines be the same as Meta-PN including neural network layers, hidden units, regularization, propagation steps, early stopping and initialization. We use Adam to optimize the baseline methods as suggested and fine-tune for the corresponding learning rate on different datasets. Note that for all the datasets, we run each experiment 100 times with multiple random splits and different initializations.
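The K = 10 power-iteration propagation mentioned above follows the scheme of Klicpera, Bojchevski, and Günnemann (2019), where the MLP's initial predictions H are diffused over the symmetrically normalized adjacency: Z ← (1 − α) ÂZ + αH. The sketch below shows one common instantiation of that update in plain Python; the tiny 3-node graph, α = 0.1, and the toy prediction matrix are illustrative assumptions, not values from the paper.

```python
import math

def normalize_adjacency(adj):
    """Return A_hat = D^(-1/2) (A + I) D^(-1/2) as nested lists."""
    n = len(adj)
    a = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]                      # add self-loops
    deg = [sum(row) for row in a]                # degrees of A + I
    return [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def propagate(a_hat, h, num_steps=10, alpha=0.1):
    """K power-iteration steps: Z <- (1 - alpha) * A_hat @ Z + alpha * H,
    starting from the initial predictions H (a PPR-style diffusion)."""
    z = [row[:] for row in h]
    for _ in range(num_steps):
        az = matmul(a_hat, z)
        z = [[(1 - alpha) * az[i][j] + alpha * h[i][j]
              for j in range(len(h[0]))] for i in range(len(h))]
    return z

# Toy 3-node path graph 0 - 1 - 2; node 0 is confidently class 0,
# nodes 1 and 2 start out uncertain between the two classes.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
h = [[0.9, 0.1],
     [0.5, 0.5],
     [0.5, 0.5]]
z = propagate(normalize_adjacency(adj), h, num_steps=10, alpha=0.1)
```

After 10 propagation steps, node 1 (adjacent to the confident node 0) leans toward class 0 more strongly than the farther node 2 does, which is the intuition behind using propagated soft labels as supervision on unlabeled nodes.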