Neural Common Neighbor with Completion for Link Prediction

Authors: Xiyuan Wang, Haotong Yang, Muhan Zhang

ICLR 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In this section, we extensively evaluate the performance of both NCN and NCNC. Detailed experimental settings are included in Appendix D. |
| Researcher Affiliation | Academia | Xiyuan Wang (1,2) wangxiyuan@pku.edu.cn; Haotong Yang (1,2,3) haotongyang@pku.edu.cn; Muhan Zhang (1) muhan@pku.edu.cn. (1) Institute for Artificial Intelligence, Peking University; (2) School of Intelligence Science and Technology, Peking University; (3) Key Lab of Machine Perception (MoE). |
| Pseudocode | No | The paper describes its methods using mathematical equations and textual explanations, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. (A hedged sketch of the common-neighbor scoring idea follows the table.) |
| Open Source Code | Yes | Our code is available at https://github.com/GraphPKU/NeuralCommonNeighbor. |
| Open Datasets | Yes | We use seven popular real-world link prediction benchmarks. Among these, three are Planetoid citation networks: Cora, Citeseer, and Pubmed (Yang et al., 2016). Others are from Open Graph Benchmark (Hu et al., 2020): ogbl-collab, ogbl-ppa, ogbl-citation2, and ogbl-ddi. (Loading sketch below the table.) |
| Dataset Splits | Yes | Random splits use 70%/10%/20% edges for training/validation/test set respectively. (Split sketch below.) |
| Hardware Specification | Yes | All experiments are conducted on an Nvidia 4090 GPU on a Linux server. |
| Software Dependencies | No | The paper mentions using 'Pytorch Geometric (Fey & Lenssen, 2019)', 'Pytorch (Paszke et al., 2019)', and 'optuna (Akiba et al., 2019)' but does not provide specific version numbers for these software dependencies. (Version-logging snippet below.) |
| Experiment Setup | Yes | Training process. We utilize Adam optimizer to optimize models and set an epoch upper bound 100. All results of our models are provided from runs with 10 random seeds. (Training-outline sketch below.) |
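
Since the paper states its method through equations rather than a pseudocode block (Pseudocode row above), the following is a minimal, hedged sketch of a common-neighbor-pooled link score in that spirit. The function name `ncn_score`, the MLP shape, and the adjacency-list input are illustrative assumptions; the paper's own equations define the actual NCN and NCNC architectures.

```python
# Minimal sketch only: a common-neighbor-pooled link score in the spirit of
# NCN. `ncn_score`, the MLP width, and the adjacency-list format are
# assumptions for illustration; the paper's equations are authoritative.
import torch
import torch.nn as nn

hidden = 64
mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

def ncn_score(h: torch.Tensor, adj: list, i: int, j: int) -> torch.Tensor:
    """Score candidate link (i, j) given node embeddings h and adjacency sets."""
    common = adj[i] & adj[j]                                   # common neighbors of i and j
    pooled = (h[list(common)].sum(dim=0) if common
              else torch.zeros_like(h[i]))                     # sum-pool their embeddings
    pair = h[i] * h[j]                                         # Hadamard product of the endpoints
    return mlp(torch.cat([pair, pooled], dim=-1))              # logit for the candidate link
```

In the paper, the node embeddings come from a GNN applied to the input graph, and the completion variant (NCNC) additionally fills in likely-unobserved common neighbors rather than using only the observed ones.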
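For the Open Datasets row, all seven benchmarks are available through standard loaders. Below is a hedged example using PyTorch Geometric and the ogb package; the root directories are placeholder assumptions.

```python
# Hedged sketch: fetching the benchmarks listed above with the standard
# PyTorch Geometric / OGB loaders (root directories are placeholders).
from torch_geometric.datasets import Planetoid
from ogb.linkproppred import PygLinkPropPredDataset

cora = Planetoid(root="data/Planetoid", name="Cora")              # also "Citeseer", "Pubmed"
collab = PygLinkPropPredDataset(name="ogbl-collab", root="data")  # also ogbl-ppa, ogbl-citation2, ogbl-ddi
split_edge = collab.get_edge_split()                              # OGB's official train/valid/test edges
```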
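The Dataset Splits row reports a 70%/10%/20% random edge split for the Planetoid graphs. One hedged way to realize such a split is PyTorch Geometric's RandomLinkSplit; the paper's own splitting code may differ in details such as negative sampling.

```python
# Hedged sketch of a 70%/10%/20% random edge split; the paper's exact
# procedure (e.g. negative sampling) may differ from this transform's defaults.
from torch_geometric.transforms import RandomLinkSplit

transform = RandomLinkSplit(num_val=0.1, num_test=0.2,  # remaining 70% of edges for training
                            is_undirected=True)
train_data, val_data, test_data = transform(cora[0])    # `cora` loaded as in the previous sketch
```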
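Because the Software Dependencies row notes that no version numbers are given, anyone reproducing the results may want to record the versions actually installed. A trivial snippet:

```python
# Record the installed versions, since the paper does not pin them.
import torch
import torch_geometric
import optuna

print("torch:", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)
print("optuna:", optuna.__version__)
```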
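Finally, the Experiment Setup row describes the training protocol: Adam, an epoch upper bound of 100, and 10 random seeds. A hedged outline of that protocol follows; `build_model`, `train_one_epoch`, and `evaluate` are hypothetical placeholders, and the learning rate is an assumption rather than a value from the paper.

```python
# Hedged outline of the reported protocol (Adam, <=100 epochs, 10 seeds).
# `build_model`, `train_one_epoch`, and `evaluate` are hypothetical
# placeholders; the learning rate is an assumption, not a value from the paper.
import torch

results = []
for seed in range(10):                                     # 10 random seeds
    torch.manual_seed(seed)
    model = build_model()                                   # placeholder for NCN / NCNC
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(100):                                # epoch upper bound of 100
        train_one_epoch(model, optimizer)                   # placeholder training step
    results.append(evaluate(model))                         # placeholder evaluation
```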