A Multi-Scale Approach for Graph Link Prediction

Authors: Lei Cai, Shuiwang Ji (pp. 3308–3315)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results on 14 datasets from different areas demonstrate that our proposed method outperforms the state-of-the-art methods by employing multi-scale graphs without additional parameters.
Researcher Affiliation | Academia | Lei Cai (Washington State University), Shuiwang Ji (Texas A&M University)
Pseudocode | No | The paper includes a diagram (Figure 1) of the proposed method but no structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code and datasets are attached in the supplementary material.
Open Datasets | Yes | In the experiments, we evaluate our proposed method on 14 datasets including BUP, C.ele, USAir, SMG, EML, NSC, YST, Power, KHN, ADV, GRQ, LDG, HPD, ZWL (Watts and Strogatz 1998; Newman 2001). The code and datasets are attached in the supplementary material.
Dataset Splits | No | The paper specifies training and test splits ("50% and 80% existing links from each graph as positive training samples. The remaining 50% and 20% links are used as positive test samples"), but no explicit validation split percentage or description is provided.
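The split protocol quoted above can be sketched as follows. Note that only the positive-link percentages come from the paper; sampling an equal number of absent links as negatives is a common link-prediction convention and is an assumption here:

```python
import random

def split_links(edges, num_nodes, train_ratio=0.5, seed=0):
    """Split observed links into positive train/test sets and sample an
    equal number of absent links as negatives. Only the positive split
    ratio (0.5 or 0.8) is taken from the paper; negative sampling is a
    standard convention, not quoted from it."""
    rng = random.Random(seed)
    edges = [tuple(sorted(e)) for e in edges]  # canonical (u, v) order
    rng.shuffle(edges)
    cut = int(train_ratio * len(edges))
    train_pos, test_pos = edges[:cut], edges[cut:]
    # Sample node pairs with no observed edge as negative examples.
    existing = set(edges)
    negatives = set()
    while len(negatives) < len(edges):
        u, v = rng.sample(range(num_nodes), 2)
        pair = (min(u, v), max(u, v))
        if pair not in existing:
            negatives.add(pair)
    negatives = list(negatives)
    return train_pos, test_pos, negatives[:cut], negatives[cut:]
```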
Hardware Specification | No | No specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments are provided in the paper.
Software Dependencies | No | The paper mentions models like DGCNN and standard methods (Katz, PageRank, SimRank, node2vec) and general components (graph convolution layers, 1-D convolution layers), but does not list specific software libraries or their version numbers (e.g., Python 3.x, PyTorch 1.x).
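The heuristic baselines named above have closed-form definitions; for example, the Katz index sums walks of all lengths between two nodes, each discounted by a damping factor. A minimal NumPy sketch (using the damping factor 0.001 reported in the paper's setup; the implementation itself is an illustration, not the authors' code) could look like:

```python
import numpy as np

def katz_scores(adj, beta=0.001):
    """Katz index: S = (I - beta*A)^{-1} - I, i.e. the sum over all
    walk lengths l >= 1 of beta^l * A^l. beta=0.001 matches the
    damping factor reported in the paper's experiment setup."""
    n = adj.shape[0]
    return np.linalg.inv(np.eye(n) - beta * adj) - np.eye(n)
```

With a small damping factor, short walks dominate: in a path graph 0-1-2, the score for the direct pair (0, 1) is roughly beta, while the two-hop pair (0, 2) scores roughly beta squared.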
Experiment Setup | Yes | The number of channels for the four graph convolution layers is set to 32, 32, 32, and 1. The ratio of the sort pooling layer is set to 0.6. The classification network consists of two 1-D convolution layers and a fully connected layer. The number of channels for the two 1-D convolution layers is set to 16 and 32. We train the DGCNN network for 50 epochs. The hop number h is set to 2 for those graphs. The damping factor in the Katz method is set to 0.001. The damping factor in PageRank (PR) is set to 0.85. The constant factor in SimRank (SR) is set to 0.8. For node2vec, we use 128-dimensional embeddings from the software with default parameters.
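The DGCNN configuration above (graph convolution layers followed by sort pooling with ratio 0.6) can be illustrated with a small NumPy sketch. The degree normalization and ReLU activation follow the standard DGCNN formulation and are assumptions here, not the authors' exact code:

```python
import numpy as np

def graph_conv(adj, feats, weight):
    """One propagation step Z = relu(D^-1 (A + I) X W): aggregate each
    node's own features and its neighbors', normalized by degree
    (standard DGCNN-style graph convolution; a sketch, not the
    authors' implementation)."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv = 1.0 / a_hat.sum(axis=1, keepdims=True)
    return np.maximum(d_inv * (a_hat @ feats @ weight), 0.0)

def sort_pool(feats, ratio=0.6):
    """SortPooling: order nodes by their last feature channel and keep
    the top ceil(ratio * n) rows, producing a fixed-size input for the
    1-D convolution layers that follow."""
    k = int(np.ceil(ratio * feats.shape[0]))
    order = np.argsort(-feats[:, -1])
    return feats[order[:k]]
```

With ratio 0.6, a 5-node subgraph is reduced to its 3 highest-ranked node representations before classification.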