Pure Message Passing Can Estimate Common Neighbor for Link Prediction
Authors: Kaiwen Dong, Zhichun Guo, Nitesh Chawla
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on benchmark datasets from various domains, where our method consistently outperforms the baseline methods, establishing new state-of-the-arts. Our empirical investigations provide compelling evidence of MPLP's dominance. Benchmark tests reveal that MPLP not only holds its own but outstrips state-of-the-art models in link prediction performance. (A minimal sketch of the underlying common-neighbor estimator appears after the table.) |
| Researcher Affiliation | Academia | ¹Computer Science and Engineering, University of Notre Dame; ²Lucy Family Institute for Data and Society, University of Notre Dame |
| Pseudocode | No | The paper describes methods using mathematical equations and textual explanations (e.g., Equation 3, Equation 5), but does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/Barcavin/efficient-node-labelling. |
| Open Datasets | Yes | We conduct evaluations across a diverse spectrum of 15 graph benchmark datasets, which include 8 non-attributed and 7 attributed graphs. It also includes three datasets from OGB [10] with predefined train/test splits. ... USAir [44]: a graph of US airlines; NS [45]: a collaboration network of network science researchers; PB [46]: a graph of links between web pages on US political topics; Yeast [47]: a protein-protein interaction network in yeast; |
| Dataset Splits | Yes | In the absence of predefined splits, links are partitioned into train, validation, and test sets using a 70-10-20 percent split. (A schematic of this split appears after the table.) |
| Hardware Specification | Yes | We run our experiments on a Linux system equipped with an NVIDIA A100 GPU with 80GB of memory. |
| Software Dependencies | No | The paper states 'We implement MPLP in Pytorch Geometric framework [52]' but does not provide specific version numbers for PyTorch Geometric or other software dependencies. |
| Experiment Setup | Yes | The chosen hyperparameters are as follows: Number of Hops (r): We set the maximum number of hops to r = 2. Node Signature Dimension (F): The dimension of node signatures, F, is fixed at 1024, except for Citation2 with 512. The minimum degree of nodes to be considered as hubs (b): We experiment with values in the set [50, 100, 150]. Batch Size (B): We vary the batch size depending on the graph type: For the 8 non-attributed graphs, we explore batch sizes within [512, 1024]. For the 4 attributed graphs coming from [51], we search within [2048, 4096]. For OGB datasets, we use 32768 for Collab and PPA, and 261424 for Citation2. |
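The paper's central claim is that pure message passing over quasi-orthogonal node signatures can estimate common-neighbor counts. The sketch below illustrates that idea only: it is not the authors' implementation (that lives in the linked repository), and the function name, the use of plain NumPy, and the Gaussian signatures are assumptions made for illustration. With signature dimension F = 1024 (the value reported above), the inner product of two propagated signatures concentrates around the true common-neighbor count.

```python
import numpy as np

def estimate_common_neighbors(adj, F=1024, seed=0):
    """Illustrative sketch: estimate common-neighbor counts with one
    round of message passing over random node signatures.

    adj: (n, n) binary adjacency matrix.
    F:   node-signature dimension (the paper fixes F = 1024 for most datasets).
    Returns an (n, n) matrix whose (u, v) entry approximates |N(u) ∩ N(v)|.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # Quasi-orthogonal signatures: scaled Gaussian vectors satisfy
    # E[x_u · x_u] = 1 and E[x_u · x_v] = 0 for u != v, with variance O(1/F).
    sigs = rng.standard_normal((n, F)) / np.sqrt(F)
    # One message-passing step: each node sums its neighbors' signatures.
    h = adj @ sigs
    # h_u · h_v = sum over i in N(u), j in N(v) of x_i · x_j,
    # which is approximately the number of shared neighbors.
    return h @ h.T

if __name__ == "__main__":
    # Tiny example: a 4-cycle, where opposite nodes share exactly 2 neighbors.
    adj = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=float)
    est = estimate_common_neighbors(adj)
    print(np.round(est, 2))  # entries (0, 2) and (1, 3) should be close to 2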
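The 70-10-20 percent link split reported above can likewise be sketched. This is only a schematic of random edge partitioning under the stated ratios; the authors' actual preprocessing (including negative sampling and any reindexing) is in their repository, and `split_links` is a hypothetical helper name.

```python
import numpy as np

def split_links(edges, seed=0):
    """Schematic 70/10/20 train/validation/test split of a link list.

    edges: array of shape (num_edges, 2) listing node pairs.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(edges))
    n_train = int(0.7 * len(edges))
    n_valid = int(0.1 * len(edges))
    train = edges[perm[:n_train]]
    valid = edges[perm[n_train:n_train + n_valid]]
    test = edges[perm[n_train + n_valid:]]
    return train, valid, test
```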