Combining Label Propagation and Simple Models out-performs Graph Neural Networks
Authors: Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin Benson
ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Here, we show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs by combining shallow models that ignore the graph structure with two simple post-processing steps that exploit correlation in the label structure: (i) an error correlation that spreads residual errors in training data to correct errors in test data and (ii) a prediction correlation that smooths the predictions on the test data. To demonstrate the effectiveness of our methods, we use nine datasets (Table 1). (A sketch of this "correct and smooth" procedure appears after the table.) |
| Researcher Affiliation | Collaboration | Qian Huang, Horace He*, Abhay Singh, Ser-Nam Lim, Austin R. Benson (Cornell University, Facebook AI) |
| Pseudocode | No | The paper describes the C&S procedure in text and with an illustration (Figure 1), but no formal pseudocode or algorithm block is provided. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the methodology described is publicly available. |
| Open Datasets | Yes | The Arxiv and Products datasets are from the Open Graph Benchmark (OGB) (Hu et al., 2020); Cora, Citeseer, and Pubmed are three classic citation network benchmarks (Getoor et al., 2001; Getoor, 2005; Namata et al., 2012); and wikiCS is a web graph (Mernyei & Cangea, 2020). |
| Dataset Splits | Yes | The training/validation/test splits for Arxiv and Products are given by the benchmark, and the splits for wikiCS come from Mernyei & Cangea (2020). For the Rice, US counties, and email data, we use 40%/10%/50% random splits. For the smaller citation networks, we use 60%/20%/20% random splits, as in Wang & Leskovec (2020). (A sketch of such random splits appears after the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | All models were implemented with PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019). The paper mentions the software used but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | In all cases we use the Adam optimizer and tune the learning rate. We follow the models and hyperparameters provided in OGB (Hu et al., 2020) and wikiCS (Mernyei & Cangea, 2020) and manually tune some hyperparameters on the validation data for the potential of better performance. For our MLPs, every linear layer is followed by batch normalization, ReLU activation, and 0.5 dropout. The other parameters depend on the dataset as follows. Products and Arxiv: 3 layers and 256 hidden channels with learning rate equal to 0.01. (A sketch of this MLP appears after the table.) |
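The "correct and smooth" procedure quoted in the Research Type row is simple enough to sketch directly. Below is a minimal NumPy illustration, not the authors' implementation: the dense adjacency matrix, the propagation coefficients `alpha1`/`alpha2`, the iteration count, and the omission of the paper's error-scaling step are all simplifying assumptions.

```python
# Minimal sketch of the two post-processing steps: "correct" propagates the
# residual error observed on training nodes; "smooth" propagates the corrected
# predictions. Alphas and iteration counts are illustrative assumptions, and
# the paper's rescaling of the propagated error is omitted for brevity.
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalize a dense adjacency matrix: D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(d)
    nz = d > 0
    d_inv_sqrt[nz] = d[nz] ** -0.5
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def propagate(S, X, alpha, num_iters=50):
    """Label-propagation-style iteration: X <- alpha * S X + (1 - alpha) * X0."""
    X0 = X.copy()
    for _ in range(num_iters):
        X = alpha * (S @ X) + (1 - alpha) * X0
    return X

def correct_and_smooth(A, base_preds, y_onehot, train_idx, alpha1=0.8, alpha2=0.8):
    S = normalized_adjacency(A)
    # Correct: spread the residual (true label minus base prediction) on
    # training nodes out to neighboring test nodes.
    E = np.zeros_like(base_preds)
    E[train_idx] = y_onehot[train_idx] - base_preds[train_idx]
    preds = base_preds + propagate(S, E, alpha1)
    # Smooth: clamp training nodes to their true labels, then propagate.
    preds[train_idx] = y_onehot[train_idx]
    return propagate(S, preds, alpha2)
```

Here `base_preds` would be the softmax outputs of a graph-agnostic base model such as the MLP sketched below; only the post-processing touches the graph.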
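The random splits quoted in the Dataset Splits row amount to a shuffled partition of the node indices. A hypothetical helper, with the split fractions as parameters and an assumed fixed seed:

```python
# Minimal sketch of a 40%/10%/50% (or 60%/20%/20%) random node split.
# The function name and seed are illustrative, not from the paper.
import numpy as np

def random_split(num_nodes, train_frac=0.4, val_frac=0.1, seed=0):
    perm = np.random.default_rng(seed).permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]
```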
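The base predictor described in the Experiment Setup row (every linear layer followed by batch normalization, ReLU, and 0.5 dropout; 3 layers and 256 hidden channels on Products and Arxiv) maps directly onto a small PyTorch module. The sketch below makes the stated assumptions concrete; the input/output dimensions are placeholders and the training loop is omitted.

```python
# Minimal PyTorch sketch of the base MLP: Linear -> BatchNorm1d -> ReLU ->
# Dropout(0.5) blocks, with 3 linear layers and 256 hidden channels in total.
# in_dim/out_dim stand in for the dataset's feature and class counts.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden_dim=256, num_layers=3, dropout=0.5):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * (num_layers - 1) + [out_dim]
        layers = []
        for i in range(num_layers - 1):
            layers += [nn.Linear(dims[i], dims[i + 1]),
                       nn.BatchNorm1d(dims[i + 1]),
                       nn.ReLU(),
                       nn.Dropout(dropout)]
        layers.append(nn.Linear(dims[-2], dims[-1]))  # final classifier layer
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = MLP(in_dim=128, out_dim=40)  # e.g., Arxiv: 128 features, 40 classes
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # lr as quoted above
```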