Graph-Structured Gaussian Processes for Transferable Graph Learning
Authors: Jun Wu, Lisa Ainsworth, Andrew Leakey, Haixun Wang, Jingrui He
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on several transferable graph learning benchmarks demonstrate the efficacy of Graph GP over state-of-the-art Gaussian process baselines. |
| Researcher Affiliation | Collaboration | (1) University of Illinois at Urbana-Champaign; (2) USDA ARS Global Change and Photosynthesis Research Unit; (3) Instacart |
| Pseudocode | Yes | Algorithm 1 Graph GP |
| Open Source Code | Yes | Code is available at https://github.com/jwu4sml/GraphGP. |
| Open Datasets | Yes | Twitch [44]: It has 6 different domains (...). Agriculture [34, 60]: It has 3 different domains (...). Airports [43]: It has 3 different domains (...). Wikipedia [44]: It has 3 different domains (...). WebKB [41]: It has 3 different domains (...). |
| Dataset Splits | Yes | For the Airports, Wikipedia, and WebKB data sets, we randomly select 10% of target nodes for the training set, 10% for the validation set, and 80% for the testing set. For the Agriculture and Twitch data sets, we randomly select 1% of target nodes for the training set, 1% for the validation set, and 98% for the testing set. |
| Hardware Specification | Yes | All the experiments are performed on a Windows machine with four 3.80GHz Intel Cores, 64GB RAM, and two NVIDIA Quadro RTX 5000 GPUs. |
| Software Dependencies | No | The paper mentions using "GPyTorch [16]" and "Adam [24]" but does not provide specific version numbers for these software dependencies, which is required for reproducibility. |
| Experiment Setup | Yes | The hyperparameters are optimized using Adam [24] with a learning rate of 0.01 for a total of 500 training epochs. |
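The node-level split protocol quoted above (10%/10%/80% or 1%/1%/98% random selection of target nodes) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the function name `split_target_nodes`, the seed, and the 1,000-node graph size are assumptions for the example; only the split fractions come from the paper.

```python
import random

def split_target_nodes(num_nodes, train_frac, val_frac, seed=0):
    """Randomly partition target-graph node indices into train/val/test.

    Mirrors the quoted protocol: e.g. 10%/10%/80% for Airports,
    Wikipedia, and WebKB, or 1%/1%/98% for Agriculture and Twitch.
    """
    rng = random.Random(seed)
    nodes = list(range(num_nodes))
    rng.shuffle(nodes)
    n_train = int(num_nodes * train_frac)
    n_val = int(num_nodes * val_frac)
    train = nodes[:n_train]
    val = nodes[n_train:n_train + n_val]
    test = nodes[n_train + n_val:]  # remainder goes to testing
    return train, val, test

# 10% train / 10% validation / 80% test on a hypothetical 1,000-node target graph
train, val, test = split_target_nodes(1000, 0.10, 0.10, seed=42)
print(len(train), len(val), len(test))  # 100 100 800
```

Fixing the shuffle seed makes the split reproducible across runs, which is exactly the detail (alongside library versions) that the review flags as needed for reproducibility.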