Learning from Counterfactual Links for Link Prediction
Authors: Tong Zhao, Gang Liu, Daheng Wang, Wenhao Yu, Meng Jiang
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark data show that our graph learning method achieves state-of-the-art performance on the task of link prediction. |
| Researcher Affiliation | Academia | 1Department of Computer Science and Engineering, University of Notre Dame, IN, USA. Correspondence to: Tong Zhao <tzhao2@nd.edu>. |
| Pseudocode | Yes | Algorithm 1 summarizes the whole process of CFLP. |
| Open Source Code | Yes | Source code of the proposed CFLP method is publicly available at https://github.com/DM2-ND/CFLP. |
| Open Datasets | Yes | We conduct experiments on five benchmark datasets including citation networks (CORA, CITESEER, PUBMED (Yang et al., 2016)), social network (FACEBOOK (McAuley & Leskovec, 2012)), and drug-drug interaction network (OGB-DDI (Wishart et al., 2018)) from the Open Graph Benchmark (OGB) (Hu et al., 2020). (Section 4.1) All the datasets used in this work are publicly available. (Appendix A) |
| Dataset Splits | Yes | For the first four datasets, we randomly select 10%/20% of the links and the same numbers of disconnected node pairs as validation/test samples. The links in the validation and test sets are masked off from the training graph. For OGB-DDI, we used the OGB official train/validation/test splits. |
| Hardware Specification | Yes | All the experiments in this work were conducted on a Linux server with Intel Xeon Gold 6130 Processor (16 Cores @2.1Ghz), 96 GB of RAM, and 4 RTX 2080Ti cards (11 GB of RAM each). |
| Software Dependencies | Yes | Our method is implemented with Python 3.8.5 with PyTorch. (Appendix B) We implement the GNN encoders with torch_geometric (Fey & Lenssen, 2019). |
| Experiment Setup | Yes | We use the Adam optimizer with a simple cyclical learning rate scheduler (Smith, 2017)... We manually tune the following hyperparameters over the ranges: lr ∈ {0.005, 0.01, 0.05, 0.1, 0.2}, α ∈ {0.001, 0.01, 0.1, 1, 2}, β ∈ {0.001, 0.01, 0.1, 1, 2}, γpct ∈ {10, 20, 30}. |
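The split procedure quoted in the Dataset Splits row (hold out 10%/20% of the links for validation/test, sample an equal number of disconnected node pairs as negatives, and mask the held-out links from the training graph) can be sketched as below. This is an illustrative reconstruction, not code from the CFLP repository; all function and variable names are assumptions.

```python
import numpy as np

def split_links(edges, num_nodes, val_frac=0.1, test_frac=0.2, seed=0):
    """Sketch of the paper's split: hold out val/test links and sample
    an equal number of disconnected node pairs as negatives."""
    rng = np.random.default_rng(seed)
    edges = np.asarray(edges)
    perm = rng.permutation(len(edges))
    n_val = int(val_frac * len(edges))
    n_test = int(test_frac * len(edges))
    val_pos = edges[perm[:n_val]]
    test_pos = edges[perm[n_val:n_val + n_test]]
    # Remaining links form the training graph (held-out links are masked off).
    train_pos = edges[perm[n_val + n_test:]]

    # Sample the same number of disconnected (non-adjacent) node pairs.
    existing = set(map(tuple, edges.tolist()))
    negatives = []
    while len(negatives) < n_val + n_test:
        u, v = rng.integers(0, num_nodes, size=2)
        if u != v and (u, v) not in existing and (v, u) not in existing:
            negatives.append((u, v))
            existing.add((u, v))  # avoid duplicate negative pairs
    negatives = np.array(negatives)
    return train_pos, val_pos, test_pos, negatives[:n_val], negatives[n_val:]
```

For OGB-DDI this sampling is not needed, since the OGB official train/validation/test splits are used directly.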
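The manual tuning described in the Experiment Setup row amounts to enumerating a small grid over the listed ranges. A minimal sketch of that enumeration follows; the dictionary keys and helper name are illustrative, and the actual CFLP training loop is not reproduced here.

```python
from itertools import product

# Hyperparameter ranges reported in the paper's experiment setup.
GRID = {
    "lr": [0.005, 0.01, 0.05, 0.1, 0.2],
    "alpha": [0.001, 0.01, 0.1, 1, 2],
    "beta": [0.001, 0.01, 0.1, 1, 2],
    "gamma_pct": [10, 20, 30],
}

def grid_configs(grid):
    """Yield every hyperparameter combination as a dict."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid_configs(GRID))
# 5 * 5 * 5 * 3 = 375 configurations to evaluate on the validation split.
```

Each configuration would be trained with Adam under a cyclical learning-rate schedule (e.g. PyTorch's `torch.optim.lr_scheduler.CyclicLR`), keeping the setting with the best validation score.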