Link Prediction with Non-Contrastive Learning
Authors: William Shiao, Zhichun Guo, Tong Zhao, Evangelos E. Papalexakis, Yozen Liu, Neil Shah
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work, we extensively evaluate the performance of existing non-contrastive methods for link prediction in both transductive and inductive settings. |
| Researcher Affiliation | Collaboration | ¹University of California, Riverside; ²University of Notre Dame; ³Snap Inc. |
| Pseudocode | Yes | Algorithm 1: PyTorch-style pseudocode for T-BGRL (a hedged sketch of this objective appears after the table) |
| Open Source Code | Yes | To ensure reproducibility, our source code is available online at https://github.com/snap-research/non-contrastive-link-prediction. |
| Open Datasets | Yes | We use the Cora and Citeseer citation networks (Sen et al., 2008), the Coauthor-CS and Coauthor-Physics co-authorship networks, and the Amazon-Computers and Amazon-Photos co-purchase networks (Shchur et al., 2018). (Loading sketch after the table.) |
| Dataset Splits | Yes | We use an 85/5/10 split for training/validation/testing data following Zhang & Chen (2018); Cai et al. (2020). (Split sketch after the table.) |
| Hardware Specification | Yes | We run all of our experiments on either NVIDIA P100 or V100 GPUs. We use machines with 12 virtual CPU cores and 24 GB of RAM for the majority of our experiments. We exclusively use V100s for our timing experiments. We ran our experiments on Google Cloud Platform. |
| Software Dependencies | No | The paper mentions "PyTorch-style pseudocode" and a "Weights and Biases (Biewald, 2020) Bayesian optimizer" but does not provide specific version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | We run a Bayesian hyperparameter sweep for 25 runs across each model-dataset combination with the target metric being the validation Hits@50. Each reported result is the mean over 5 runs (retraining both the encoder and decoder). We provide a sample configuration file to reproduce our sweeps, as well as the exact parameters used for the top T-BGRL runs shown in our tables. (Sweep sketch after the table.) |
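The paper's Algorithm 1 gives PyTorch-style pseudocode for T-BGRL, which is not reproduced here. As a rough sketch of the triplet-style idea it describes: the online encoder's prediction is pulled toward the target encoder's embedding of the other augmented view (as in BGRL) and pushed away from the target embedding of a cheaply corrupted view. The corruption function, the `lam` weighting, and all function names below are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def tbgrl_loss(online_encoder, target_encoder, predictor,
               view1, view2, corrupted, lam=0.5):
    """Triplet-style BGRL loss (sketch): pull the online prediction toward
    the target embedding of the other augmented view, and push it away from
    the target embedding of a corrupted view. The weighting is an assumption."""
    q = predictor(online_encoder(view1))   # online branch (gradients flow)
    with torch.no_grad():                  # target branch gets no gradients
        z_pos = target_encoder(view2)
        z_neg = target_encoder(corrupted)
    pos = F.cosine_similarity(q, z_pos, dim=-1).mean()
    neg = F.cosine_similarity(q, z_neg, dim=-1).mean()
    return -(1 - lam) * pos + lam * neg    # maximize pos, minimize neg

@torch.no_grad()
def ema_update(online_encoder, target_encoder, tau=0.99):
    """BGRL-style exponential moving average update of the target encoder."""
    for p_o, p_t in zip(online_encoder.parameters(),
                        target_encoder.parameters()):
        p_t.mul_(tau).add_(p_o, alpha=1.0 - tau)
```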
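For reference, all six benchmark graphs named in the paper are available through PyTorch Geometric's built-in dataset classes. Whether the released code uses these loaders, and the `root` paths below, are assumptions for illustration.

```python
from torch_geometric.datasets import Planetoid, Coauthor, Amazon

# The six benchmark graphs from the paper, via PyTorch Geometric.
datasets = {
    'Cora': Planetoid(root='data/Planetoid', name='Cora'),
    'Citeseer': Planetoid(root='data/Planetoid', name='CiteSeer'),
    'Coauthor-CS': Coauthor(root='data/Coauthor', name='CS'),
    'Coauthor-Physics': Coauthor(root='data/Coauthor', name='Physics'),
    'Amazon-Computers': Amazon(root='data/Amazon', name='Computers'),
    'Amazon-Photos': Amazon(root='data/Amazon', name='Photo'),  # PyG calls it "Photo"
}
```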
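One way to realize the 85/5/10 edge split is PyG's `RandomLinkSplit` transform; whether the authors' code uses this exact transform, or matches the Zhang & Chen (2018) protocol detail-for-detail, is an assumption.

```python
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid

data = Planetoid(root='data/Planetoid', name='Cora')[0]

# Hold out 5% of edges for validation and 10% for testing (85% train),
# matching the 85/5/10 protocol quoted above.
transform = T.RandomLinkSplit(num_val=0.05, num_test=0.1,
                              is_undirected=True)
train_data, val_data, test_data = transform(data)
```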
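The described sweep maps naturally onto a Weights & Biases Bayesian sweep. A minimal sketch follows; the metric key, parameter names, and search ranges are placeholders, since the exact values are in the paper's sample configuration file.

```python
import wandb

# Illustrative Bayesian sweep targeting validation Hits@50. Parameter
# names and ranges below are assumptions, not the paper's configuration.
sweep_config = {
    'method': 'bayes',
    'metric': {'name': 'val_hits_at_50', 'goal': 'maximize'},
    'parameters': {
        'lr': {'min': 1e-5, 'max': 1e-2},
        'hidden_dim': {'values': [128, 256, 512]},
    },
}

def train():
    with wandb.init() as run:
        cfg = run.config
        # ... train encoder + decoder 5 times with cfg.lr / cfg.hidden_dim,
        # evaluate, and log the mean validation Hits@50 ...
        run.log({'val_hits_at_50': 0.0})  # placeholder value

sweep_id = wandb.sweep(sweep_config, project='non-contrastive-link-prediction')
wandb.agent(sweep_id, function=train, count=25)  # 25 runs per combination
```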