PageRank Bandits for Link Prediction
Authors: Yikun Ban, Jiaru Zou, Zihao Li, Yunzhe Qi, Dongqi Fu, Jian Kang, Hanghang Tong, Jingrui He
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we extensively evaluate PRB in both online and offline settings, comparing it with bandit-based and graph-based methods. The empirical success of PRB demonstrates the value of the proposed fusion approach. |
| Researcher Affiliation | Collaboration | 1 University of Illinois Urbana-Champaign, 2 Meta AI, 3 University of Rochester; 1 {yikunb2, jiaruz2, zihaoli5, yunzheq2, htong, jingrui}@illinois.edu; 2 dongqifu@meta.com; 3 jian.kang@rochester.edu |
| Pseudocode | Yes | Algorithm 1 PRB (PageRank Bandits) ... Algorithm 2 PRB-N (Node Classification) ... Algorithm 3 PRB-Greedy |
| Open Source Code | Yes | Our code is released at https://github.com/jiaruzouu/PRB |
| Open Datasets | Yes | We use three categories of real-world datasets to compare PRB with bandit-based baselines. The details and experiment settings are as follows. (1) Recommendation datasets: Movielens [32] and Amazon Fashion [48]... (2) Social network datasets: Facebook [38] and GR-QC [37]... (3) Node classification datasets: Cora, Citeseer, and Pubmed from the Planetoid citation networks [68]... Specifically, we apply Cora, Citeseer, and Pubmed from Planetoid citation networks [68]; ogbl-collab, ogbl-ppa, and ogbl-ddi from Open Graph Benchmark [33]. |
| Dataset Splits | Yes | Offline Link Prediction. In this setting, both training and testing data are provided, following the typical supervised learning process. ... We then use the trained model to perform link prediction on the testing data... Splits: random, random, random, fixed, fixed, fixed ... Random splits use 70%, 10%, and 20% of edges for the training, validation, and test sets, respectively. |
| Hardware Specification | Yes | We conduct all of our experiments on an Nvidia 3060 GPU with an x64-based processor. |
| Software Dependencies | No | The paper mentions optimizers such as 'SGD as the optimizer' and 'Adam optimizer', and implicitly uses deep learning frameworks. However, it does not provide specific version numbers for these software dependencies (e.g., 'PyTorch 1.9' or 'Python 3.8'). |
| Experiment Setup | Yes | For all bandit-based methods including PRB, for fair comparison, the exploitation network f1 is built by a 2-layer fully connected network with 100-width. For the exploration network of EE-Net and PRB, we use a 2-layer fully connected network with 100-width as well. ... we conduct the grid search for the exploration parameter ν over {0.001, 0.01, 0.1, 1} and for the regularization parameter λ over {0.01, 0.1, 1}. ... For all neural networks, we conduct the grid search for learning rate over {0.01, 0.001, 0.0005, 0.0001}. For PRB, we strictly follow the settings in [42] to implement the Page Rank component. Specifically, we set the parameter α = 0.85 after grid search over {0.1, 0.3, 0.5, 0.85, 0.9}, and the terminated accuracy ϵ = 10^-6. ... We set the training epoch to 100 and evaluate the model performance on validation and test datasets. |
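The 70%/10%/20% random edge split quoted under Dataset Splits can be sketched as follows; `split_edges` and the toy edge list are illustrative, not taken from the released code.

```python
import random

def split_edges(edges, train_frac=0.70, val_frac=0.10, seed=0):
    """Randomly split an edge list into train/val/test (default 70/10/20)."""
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    n = len(edges)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = edges[:n_train]
    val = edges[n_train:n_train + n_val]
    test = edges[n_train + n_val:]
    return train, val, test

# toy complete graph on 10 nodes: 45 undirected edges
edges = [(i, j) for i in range(10) for j in range(i + 1, 10)]
train, val, test = split_edges(edges)
```

The fixed splits for the ogbl datasets, by contrast, come predefined with the Open Graph Benchmark and need no such procedure.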
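The PageRank parameters reported above (damping α = 0.85, terminated accuracy ε = 10⁻⁶) slot directly into standard power iteration; the sketch below is a generic implementation under that reading, not the authors' component from [42].

```python
import numpy as np

def pagerank(adj, alpha=0.85, eps=1e-6, max_iter=1000):
    """Power iteration for PageRank with damping factor alpha,
    terminating when the L1 change between iterates falls below eps."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0                    # avoid division by zero on isolated nodes
    P = adj / deg[:, None]                 # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)                # uniform initial distribution
    for _ in range(max_iter):
        r_new = alpha * (P.T @ r) + (1 - alpha) / n
        if np.abs(r_new - r).sum() < eps:  # terminated accuracy reached
            return r_new
        r = r_new
    return r

# small star graph: node 0 is linked to nodes 1 and 2
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
r = pagerank(A)
```

On this star graph the hub (node 0) receives the largest score, while the two leaves tie by symmetry.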
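How Algorithm 1 might combine the PageRank signal with the neural bandit's exploitation and exploration scores can be sketched roughly as below. The 2-layer, 100-width fully connected networks match the Experiment Setup description, but here they carry random weights as stand-ins for trained models, and the additive fusion with weight `beta` is a hypothetical illustration of the fusion idea, not the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(width=100, d=8):
    """A 2-layer fully connected network of the quoted width, with
    random weights standing in for a trained network (illustrative only)."""
    W1 = rng.normal(size=(width, d)) / np.sqrt(d)
    W2 = rng.normal(size=width) / np.sqrt(width)
    return lambda x: float(W2 @ np.maximum(W1 @ x, 0.0))

f1 = mlp()   # exploitation network: estimates the reward of a candidate edge
f2 = mlp()   # exploration network: estimates the uncertainty of f1's estimate

def prb_score(x, ppr, beta=0.5):
    """Hypothetical additive fusion of the bandit score and the PageRank score."""
    return f1(x) + f2(x) + beta * ppr

# select the candidate edge with the highest fused score
candidates = [rng.normal(size=8) for _ in range(5)]   # candidate edge features
ppr_scores = rng.uniform(size=5)                      # PageRank scores of candidates
best = max(range(5), key=lambda i: prb_score(candidates[i], ppr_scores[i]))
```

The key point the sketch conveys is the fusion itself: graph-topological evidence (PageRank) and learned contextual evidence (the two networks) both contribute to the arm-selection score.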