Linkless Link Prediction via Relational Distillation

Authors: Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh V. Chawla, Neil Shah, Tong Zhao

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that LLP boosts the link prediction performance of MLPs with significant margins, and even outperforms the teacher GNNs on 7 out of 8 benchmarks. LLP also achieves a 70.68× speedup in link prediction inference compared to GNNs on the large-scale OGB dataset.
Researcher Affiliation | Collaboration | (1) Department of Computer Science and Engineering, University of Notre Dame, IN, USA; (2) Department of Computer Science and Engineering, University of California, Riverside, CA, USA; (3) Department of Computer Science, University of California, Los Angeles, CA, USA; (4) Snap Inc., CA, USA.
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper.
Open Source Code | Yes | To ensure the reproducibility of LLP, our implementation is publicly available at https://github.com/snap-research/linkless-link-prediction/.
Open Datasets | Yes | We conduct the experiments using 8 commonly used benchmark datasets for link prediction: Cora, Citeseer, Pubmed, Computers, Photos, CS, Physics, and Collab. The statistics of the datasets are shown in Table 1, with further details provided in Appendix B. (See the dataset-loading sketch below.)
Dataset Splits | Yes | Following previous works (Zhang & Chen, 2018; Chami et al., 2019; Cai et al., 2021), we randomly sample 5%/15% of the links, along with the same number of no-edge node pairs, from the graph as the validation/test sets on the non-OGB datasets. The validation/test links are masked off from the training graph. For the OGB datasets, we follow their official train/validation/test splits (Wang et al., 2020a). (See the split sketch below.)
Hardware Specification | Yes | We conduct experiments with an NVIDIA V100 GPU (16 GB memory). For the Citation2 and IGB datasets, we run the experiments on an NVIDIA A100 GPU with 40 GB memory.
Software Dependencies | No | Our implementation is based on PyTorch Geometric (Fey & Lenssen, 2019). No specific version numbers are provided for PyTorch Geometric or for other key software libraries such as PyTorch itself. (See the environment-logging sketch below.)
Experiment Setup | Yes | For LLP, we conduct the hyperparameter search over the weights for L_sup, L_LLP_R, and L_LLP_D from [0.001, 0.01, 0.1, 1, 10, 100, 1000], the number of nearby nodes p from [1, 2, 3, 4, 5], the random sampling rate q/p from [1, 3, 5, 10, 15], the learning rate from [0.001, 0.005], and the dropout rate from [0, 0.5]. (See the grid-search sketch below.)
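
Dataset-loading sketch (Open Datasets row). All eight benchmarks are available through standard loaders; this is a minimal sketch using PyTorch Geometric's Planetoid, Amazon, and Coauthor dataset classes and OGB's PygLinkPropPredDataset. The "data" root path is an assumption, not taken from the authors' code.

from torch_geometric.datasets import Planetoid, Amazon, Coauthor
from ogb.linkproppred import PygLinkPropPredDataset

root = "data"  # assumed storage location
cora = Planetoid(root, name="Cora")
citeseer = Planetoid(root, name="CiteSeer")
pubmed = Planetoid(root, name="PubMed")
computers = Amazon(root, name="Computers")
photos = Amazon(root, name="Photo")
cs = Coauthor(root, name="CS")
physics = Coauthor(root, name="Physics")
collab = PygLinkPropPredDataset(name="ogbl-collab", root=root)  # Collab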
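
Split sketch (Dataset Splits row). The described protocol, 5% validation and 15% test links with an equal number of sampled no-edge pairs and held-out edges masked from the training graph, maps naturally onto PyTorch Geometric's RandomLinkSplit transform. This is a sketch of the protocol as stated, not necessarily how the authors' released code implements it.

import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid

data = Planetoid("data", name="Cora")[0]
transform = T.RandomLinkSplit(
    num_val=0.05,            # 5% of links for validation
    num_test=0.15,           # 15% of links for testing
    is_undirected=True,
    neg_sampling_ratio=1.0,  # same number of no-edge node pairs
    add_negative_train_samples=False,
)
train_data, val_data, test_data = transform(data)
# Validation/test edges are removed from train_data.edge_index,
# i.e. masked off from the training graph as the paper describes.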
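
Environment-logging sketch (Software Dependencies row). Since the paper pins no versions, a reproduction can at least record the environment it actually ran under; the attributes below are standard in both libraries.

import torch
import torch_geometric

print("torch:", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)
print("CUDA runtime:", torch.version.cuda)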
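
Grid-search sketch (Experiment Setup row). The stated search space written out as an exhaustive grid; the variable names and the train_and_eval entry point are hypothetical, and the paper does not say whether the search was exhaustive or partial.

from itertools import product

loss_weights = [0.001, 0.01, 0.1, 1, 10, 100, 1000]  # for L_sup, L_LLP_R, L_LLP_D
nearby_nodes = [1, 2, 3, 4, 5]                       # p
sampling_ratios = [1, 3, 5, 10, 15]                  # q / p
learning_rates = [0.001, 0.005]
dropout_rates = [0, 0.5]

for w_sup, w_r, w_d, p, q_over_p, lr, dropout in product(
        loss_weights, loss_weights, loss_weights,
        nearby_nodes, sampling_ratios, learning_rates, dropout_rates):
    config = {
        "w_sup": w_sup, "w_llp_r": w_r, "w_llp_d": w_d,
        "p": p, "q": q_over_p * p, "lr": lr, "dropout": dropout,
    }
    # train_and_eval(config)  # hypothetical training/evaluation entry point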