Mixture of Link Predictors on Graphs

Authors: Li Ma, Haoyu Han, Juanhui Li, Harry Shomer, Hui Liu, Xiaofeng Gao, Jiliang Tang

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results across diverse real-world datasets demonstrate substantial performance improvement from Link-MoE. |
| Researcher Affiliation | Academia | Li Ma¹*, Haoyu Han²*, Juanhui Li², Harry Shomer², Hui Liu², Xiaofeng Gao¹, Jiliang Tang². ¹Shanghai Jiao Tong University, ²Michigan State University. mali-cs@sjtu.edu.cn, gao-xf@cs.sjtu.edu.cn, {hanhaoy1,lijuanh1,shomerha,liuhui7,tangjili}@msu.edu |
| Pseudocode | Yes | In this work, we employ a two-step training strategy, as detailed in Algorithm 1 in Appendix E. |
| Open Source Code | Yes | The code is available at https://github.com/ml-ml/Link-MoE/. |
| Open Datasets | Yes | We evaluate our proposed method on eight datasets including homophilous graphs: Cora, Citeseer, Pubmed [46], ogbl-ppa, ogbl-collab, and ogbl-citation2 [47] and heterophilic graphs: Chameleon and Squirrel [48]. |
| Dataset Splits | Yes | We adopt the single fixed train/validation/test split with percentages 85/5/10% for Cora, Citeseer, and Pubmed as used in [53]. For the OGB datasets, we use the fixed splits provided by the OGB benchmark [54]. |
| Hardware Specification | Yes | We train both experts and gating models on an NVIDIA RTX A6000 GPU with 48GB memory. |
| Software Dependencies | No | The loss is optimized using the Adam optimizer [55]. |
| Experiment Setup | Yes | The loss is optimized using the Adam optimizer [55]. At first, we train all of the expert models by using the hyperparameters suggested in this repository. We then do the inference to obtain the prediction score for each link. Secondly, in order to train Link-MoE, we split the original validation dataset into a new training set and validation set. Thirdly, we train the gating model until it converges and choose the model weights associated with the best validation performance. ... The hyperparameter ranges are shown in Table 4. |
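The two-step strategy quoted above (freeze the trained experts, then fit a gating model on held-out validation links) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the array names, the linear gate, the synthetic data, and the plain gradient-descent loop (the paper uses Adam) are all assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (assumed already done): each frozen expert emits a score in [0, 1]
# for every candidate link; the gate also sees per-link input features.
n_links, n_experts, n_feats = 200, 4, 8
expert_scores = rng.random((n_links, n_experts))  # stacked expert predictions
link_feats = rng.random((n_links, n_feats))       # gate inputs (e.g. heuristics)
labels = rng.integers(0, 2, n_links).astype(float)

# Step 2: a linear gating model produces per-link softmax weights over experts;
# the mixture prediction is the weighted sum of expert scores.
W = np.zeros((n_feats, n_experts))

def forward(W):
    logits = link_feats @ W
    gates = np.exp(logits - logits.max(axis=1, keepdims=True))
    gates /= gates.sum(axis=1, keepdims=True)            # softmax over experts
    return gates, (gates * expert_scores).sum(axis=1)    # mixture prediction

def bce(p, y, eps=1e-9):
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Train the gate only, with plain gradient descent on binary cross-entropy.
lr = 0.3
for _ in range(300):
    gates, p = forward(W)
    dp = (p - labels) / (np.clip(p * (1 - p), 1e-9, None) * n_links)  # dBCE/dp
    dgate = dp[:, None] * expert_scores                  # chain rule into gates
    dlogits = gates * (dgate - (dgate * gates).sum(axis=1, keepdims=True))
    W -= lr * link_feats.T @ dlogits
```

In practice the validation set would itself be split into a new train/validation pair (as the quote describes), with the gate checkpointed at the best validation score rather than trained for a fixed number of steps.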