Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks for Link Prediction

Authors: Seongjun Yun, Seoyoon Kim, Junhyun Lee, Jaewoo Kang, Hyunwoo J. Kim

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4 Experiments: In this section, we evaluate the benefits of our method against state-of-the-art models on link prediction benchmarks. Then we analyze the contribution of each component in Neo-GNNs and show how Neo-GNNs can actually generalize and learn neighborhood overlap-based heuristic methods. 4.1 Experiment Settings. Datasets: We evaluate the effectiveness of our Neo-GNNs for link prediction on Open Graph Benchmark (OGB) datasets [41]: OGB-PPA, OGB-Collab, OGB-DDI, OGB-Citation2." (See the neighborhood overlap heuristic sketch after the table.)
Researcher Affiliation | Academia | Seongjun Yun, Seoyoon Kim, Junhyun Lee, Jaewoo Kang, Hyunwoo J. Kim, Department of Computer Science and Engineering, Korea University, {ysj5419, sykim45, ljhyun33, kangj, hyunwoojkim}@korea.ac.kr
Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper mentions using implementations from 'PyTorch Geometric [44]' and 'the official GitHub repository for SEAL', but does not state that the code for Neo-GNNs is open source or provide a link.
Open Datasets | Yes | "We evaluate the effectiveness of our Neo-GNNs for link prediction on Open Graph Benchmark (OGB) datasets [41]: OGB-PPA, OGB-Collab, OGB-DDI, OGB-Citation2."
Dataset Splits | Yes | Table 1 reports the train/valid/test split ratios: OGB-PPA 70/20/10, OGB-Collab 92/4/4, OGB-DDI 80/10/10, OGB-Citation2 98/1/1. (See the split-loading sketch after the table.)
Hardware Specification | Yes | "The experiments are conducted on an RTX 3090 (24GB) and a Quadro RTX (48GB)."
Software Dependencies | No | The paper mentions 'PyTorch' and 'PyTorch Geometric [44]' for implementations but does not specify their version numbers.
Experiment Setup | Yes | "We set the number of layers to 3 and the latent dimensionality to 256 for all GNN-based models. To train our method, we used GCN as the feature-based GNN model, and all MLP models in our Neo-GNNs consist of 2 fully connected layers. We jointly trained the feature-based GNNs and Neo-GNNs." (See the configuration sketch after the table.)
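
The Research Type excerpt says Neo-GNNs generalize neighborhood overlap-based heuristics. The sketch below only illustrates two such classical heuristics (common neighbors and Adamic-Adar) computed from a sparse adjacency matrix; it is not the authors' Neo-GNN code, and the function names and toy graph are our own.

import numpy as np
import scipy.sparse as sp

def common_neighbors(adj, src, dst):
    # Count of neighbors shared by src and dst (binary adjacency assumed).
    return int(adj[src].multiply(adj[dst]).sum())

def adamic_adar(adj, src, dst):
    # Shared neighbors weighted by the inverse log-degree of each shared neighbor.
    degrees = np.asarray(adj.sum(axis=1)).ravel()
    shared = adj[src].multiply(adj[dst]).nonzero()[1]
    return float(sum(1.0 / np.log(degrees[k]) for k in shared))

# Toy undirected graph with edges (0,1), (0,2), (1,2), (2,3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
rows = [u for u, v in edges] + [v for u, v in edges]
cols = [v for u, v in edges] + [u for u, v in edges]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

print(common_neighbors(A, 0, 3))  # node 2 is the only common neighbor -> 1
print(adamic_adar(A, 0, 3))       # 1 / log(deg(node 2)) = 1 / log(3)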
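The dataset splits quoted in the table are the standard ones shipped with OGB. A minimal loading sketch, assuming the ogb and torch_geometric packages are installed (the paper does not pin versions):

from ogb.linkproppred import PygLinkPropPredDataset

# OGB dataset names: ogbl-ppa, ogbl-collab, ogbl-ddi, ogbl-citation2.
dataset = PygLinkPropPredDataset(name="ogbl-collab")
data = dataset[0]                      # the full graph as a PyG Data object
split_edge = dataset.get_edge_split()  # dict with 'train', 'valid', 'test' edge sets

print(data)
print(split_edge["train"].keys(), split_edge["valid"].keys(), split_edge["test"].keys())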
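The Experiment Setup row fixes the encoder depth (3 layers), the hidden size (256), and the 2-layer MLP predictors. Below is a minimal PyTorch Geometric sketch of that configuration; it covers only a feature-based GCN encoder and a pairwise MLP scorer, not the structural (neighborhood overlap) branch of Neo-GNNs, and all class names are our own.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCNEncoder(torch.nn.Module):
    """3-layer GCN with 256-dimensional hidden states, as reported in the paper."""
    def __init__(self, in_dim, hidden_dim=256, num_layers=3):
        super().__init__()
        self.convs = torch.nn.ModuleList([GCNConv(in_dim, hidden_dim)])
        for _ in range(num_layers - 1):
            self.convs.append(GCNConv(hidden_dim, hidden_dim))

    def forward(self, x, edge_index):
        for conv in self.convs[:-1]:
            x = F.relu(conv(x, edge_index))
        return self.convs[-1](x, edge_index)

class LinkPredictor(torch.nn.Module):
    """2-layer MLP scoring a node pair via the element-wise product of embeddings."""
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.lin1 = torch.nn.Linear(hidden_dim, hidden_dim)
        self.lin2 = torch.nn.Linear(hidden_dim, 1)

    def forward(self, h_src, h_dst):
        h = F.relu(self.lin1(h_src * h_dst))
        return torch.sigmoid(self.lin2(h))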