Bandit Online Learning on Graphs via Adaptive Optimization
Authors: Peng Yang, Peilin Zhao, Xin Gao
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark graph datasets show that the proposed bandit algorithm outperforms state-of-the-art competitors, and sometimes even beats algorithms using full-information label feedback. |
| Researcher Affiliation | Collaboration | King Abdullah University of Science and Technology, Saudi Arabia; South China University of Technology, China; Tencent AI Lab, China |
| Pseudocode | Yes | Algorithm 1 MOLG-F: Adaptive Optimization for Online Learning on Graphs with Full Label Feedback; Algorithm 2 MOLG-B: Adaptive Optimization for Online Learning on Graphs with Bandit Feedback |
| Open Source Code | Yes | Proof. The proof is provided on the website: https://github.com/YoungBigBird1985/MOLG/ |
| Open Datasets | Yes | We exploit 4 real-world graph datasets to evaluate all the algorithms: 1) Coauthor is a coauthor graph of the DBLP dataset... 2) Cora is a citation graph... (both available at http://www.cs.umd.edu/~sen/lbc-proj/data/) 3) IMDB is an up-to-date movie dataset... (http://www.imdb.com/) 4) PubMed is a graph... (http://www.cs.umd.edu/projects/linqs/projects/lbc/) |
| Dataset Splits | No | The paper describes online learning where data is processed sequentially and the model is updated after each instance. However, it does not specify explicit train/validation/test splits (percentages or sample counts) that would allow the data partitioning to be reproduced. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processors, or memory used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or version numbers required to replicate the experiments. |
| Experiment Setup | Yes | For both methods, we tune the parameter φ with the grid {10^-2, ..., 10}. For MOLG-B, we fix the exploration parameter ϕ_t = 0.05 for all t ∈ [T]. We set b = 10 for Cora and Coauthor and b = 100 for IMDB and PubMed due to variable graph structures. Finally, we fix d = 100 for the dimension of the low-rank representation. (An illustrative sketch of this setup follows the table.) |
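
The experiment-setup values above can be read as a concrete configuration. Below is a minimal, hypothetical Python sketch of how those reported hyperparameters (the φ grid, the fixed exploration rate 0.05, and the rank d = 100) would plug into a generic bandit-feedback online multiclass loop. It is not the authors' MOLG-B algorithm: the adaptive-optimization update, the graph and low-rank machinery, and the budget parameter b are not reproduced here; the ε-greedy perceptron-style step, the function names, and the data layout are all assumptions made for illustration.

```python
# Hypothetical sketch only -- NOT the authors' MOLG-B implementation.
# It shows where the reported hyperparameters would enter a generic
# bandit-feedback online multiclass loop.
import numpy as np

PHI_GRID = [10.0 ** k for k in range(-2, 2)]  # reported grid {10^-2, ..., 10}
EXPLORATION = 0.05                            # reported fixed exploration parameter
RANK_D = 100                                  # reported low-rank representation dimension


def run_bandit_loop(features, labels, num_classes, phi, exploration=EXPLORATION, seed=0):
    """Generic epsilon-greedy bandit multiclass loop (illustrative placeholder)."""
    rng = np.random.default_rng(seed)
    num_rounds, dim = features.shape
    weights = np.zeros((num_classes, dim))    # one linear scorer per class
    mistakes = 0
    for t in range(num_rounds):
        x = features[t]
        greedy = int(np.argmax(weights @ x))
        # explore uniformly with the fixed rate, otherwise exploit the greedy label
        pred = int(rng.integers(num_classes)) if rng.random() < exploration else greedy
        correct = pred == labels[t]           # bandit feedback: only right/wrong is revealed
        mistakes += 0 if correct else 1
        # placeholder perceptron-style update scaled by phi; the paper instead uses
        # an adaptive-optimization update over a graph-based low-rank representation
        weights[pred] += (phi if correct else -phi) * x
    return mistakes / num_rounds


# Toy usage: random RANK_D-dimensional "node features", 3 classes, phi tuned on the grid.
X = np.random.default_rng(1).normal(size=(500, RANK_D))
y = np.random.default_rng(2).integers(3, size=500)
best_error = min(run_bandit_loop(X, y, num_classes=3, phi=phi) for phi in PHI_GRID)
```

The loop consumes exactly the single bit of feedback that the bandit setting provides: whether the predicted label was correct, rather than the true label itself.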