Link Prediction Based on Graph Neural Networks

Authors: Muhan Zhang, Yixin Chen

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments to evaluate SEAL. Our results show that SEAL is a superb and robust framework for link prediction, achieving unprecedentedly strong performance on various networks.
Researcher Affiliation | Academia | Muhan Zhang, Department of CSE, Washington University in St. Louis, muhan@wustl.edu; Yixin Chen, Department of CSE, Washington University in St. Louis, chen@cse.wustl.edu
Pseudocode | No | The paper does not include any explicit pseudocode blocks or algorithm listings.
Open Source Code | Yes | The code and data are available at https://github.com/muhanzhang/SEAL.
Open Datasets | Yes | The code and data are available at https://github.com/muhanzhang/SEAL. ... Datasets The eight datasets used are: USAir, NS, PB, Yeast, C.ele, Power, Router, and E.coli (please see Appendix C for details).
Dataset Splits | Yes | We randomly remove 10% existing links from each dataset as positive testing data. ... We use the remaining 90% existing links as well as the same number of additionally sampled nonexistent links to construct the training data. ... If the second-order heuristic AA outperforms the first-order heuristic CN on 10% validation data, then we choose h = 2; otherwise we choose h = 1.
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions: 'Thus, we choose a recent architecture DGCNN [17] as the default GNN, and node2vec [20] as the default embeddings.' However, no specific version numbers for these software components or other libraries are provided.
Experiment Setup | Yes | Here, we select h only from {1, 2}, since on one hand we empirically verified that the performance typically does not increase after h ≥ 3, which validates our theoretical results that the most useful information is within local structures. ... The selection principle is very simple: If the second-order heuristic AA outperforms the first-order heuristic CN on 10% validation data, then we choose h = 2; otherwise we choose h = 1. For datasets PB and E.coli, we consistently use h = 1 to fit into the memory.
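The split quoted in the Dataset Splits row (hold out 10% of observed links as positive test pairs, sample an equal number of nonexistent links as negatives) can be sketched as below. The function name, seed handling, and rejection-sampling loop are illustrative assumptions, not the authors' released implementation.

```python
import random

def split_links(edges, num_nodes, test_frac=0.1, seed=0):
    """Sketch of a SEAL-style split: hold out test_frac of observed links
    as positive test pairs and sample an equal number of node pairs with
    no observed edge as negatives. Details are assumptions for illustration."""
    rng = random.Random(seed)
    edges = [tuple(sorted(e)) for e in edges]  # undirected: store as (min, max)
    rng.shuffle(edges)
    n_test = max(1, int(len(edges) * test_frac))
    test_pos, train_pos = edges[:n_test], edges[n_test:]
    edge_set = set(edges)

    def sample_negatives(k):
        # Rejection-sample k distinct node pairs that are not observed edges
        negs, seen = [], set(edge_set)
        while len(negs) < k:
            u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
            if u == v:
                continue
            pair = (min(u, v), max(u, v))
            if pair not in seen:
                seen.add(pair)
                negs.append(pair)
        return negs

    # Draw all negatives in one pass so train and test negatives never overlap
    negs = sample_negatives(len(edges))
    test_neg, train_neg = negs[:n_test], negs[n_test:]
    return train_pos, train_neg, test_pos, test_neg
```

For a 10-node ring graph this yields 9 training and 1 test link on each side (positive and negative), matching the 90/10 proportion the row describes.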
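The h-selection rule in the Experiment Setup row compares the standard common-neighbors (CN) and Adamic-Adar (AA) heuristics on validation data. A minimal sketch follows, using the textbook formulas; the helper names and the AUC-based notion of "outperforms" are assumptions, since the row does not specify the comparison metric.

```python
import math

def common_neighbors(adj, u, v):
    # CN: number of shared neighbors (first-order heuristic)
    return len(adj[u] & adj[v])

def adamic_adar(adj, u, v):
    # AA: shared neighbors weighted by 1/log(degree) (second-order heuristic);
    # degree-1 neighbors are skipped to avoid dividing by log(1) = 0
    return sum(1.0 / math.log(len(adj[w]))
               for w in adj[u] & adj[v] if len(adj[w]) > 1)

def auc(pos_scores, neg_scores):
    # AUC as the probability a positive pair outscores a negative pair
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def choose_h(adj, val_pos, val_neg):
    # The paper's rule: h = 2 if AA beats CN on validation pairs, else h = 1
    cn_auc = auc([common_neighbors(adj, u, v) for u, v in val_pos],
                 [common_neighbors(adj, u, v) for u, v in val_neg])
    aa_auc = auc([adamic_adar(adj, u, v) for u, v in val_pos],
                 [adamic_adar(adj, u, v) for u, v in val_neg])
    return 2 if aa_auc > cn_auc else 1
```

Here `adj` is a dict mapping each node to its set of neighbors; `val_pos` and `val_neg` are the 10% validation link pairs and sampled non-links.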