A Biased Graph Neural Network Sampler with Near-Optimal Regret
Authors: Qingru Zhang, David Wipf, Quan Gan, Le Song
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experiments): "We describe the experiments to verify the effectiveness of Thanos and its improvement over Bandit Sampler in terms of sampling approximation error and final practical performance." |
| Researcher Affiliation | Collaboration | Qingru Zhang (Georgia Institute of Technology), David Wipf (Amazon Shanghai AI Lab), Quan Gan (Amazon Shanghai AI Lab), Le Song (Georgia Institute of Technology; Mohamed bin Zayed University of Artificial Intelligence) |
| Pseudocode | Yes | Algorithm 1 presents the condensed version of our proposed algorithm. See Algorithm 2 in Appendix B for the detailed version. (An illustrative bandit-sampler sketch follows the table.) |
| Open Source Code | No | The paper does not provide a specific repository link or explicit statement about the release of source code for the described methodology. |
| Open Datasets | Yes | We conduct node classification experiments on several benchmark datasets with large graphs: ogbn-arxiv, ogbn-products [18], Cora Full [5], Chameleon [11] and Squirrel [27]. (A loading sketch for the OGB datasets follows the table.) |
| Dataset Splits | Yes | Their detailed settings and dataset splits are listed in the Appendix. |
| Hardware Specification | No | The paper mentions 'Amazon Web Service for supporting the computational resources' but does not provide specific hardware details such as GPU or CPU models, or cloud instance types used for experiments. |
| Software Dependencies | No | The paper mentions TensorFlow in a footnote and discusses various models (GCN, GAT), but it does not provide version numbers for any software dependencies, libraries, or programming languages used. |
| Experiment Setup | Yes | The dimension of the hidden embedding d is 16 for Chameleon and Squirrel, and 256 for the others. The number of layers is fixed at 2. We set k = 3 for Cora Full; k = 5 for ogbn-arxiv, Chameleon, and Squirrel; and k = 10 for ogbn-products. We searched the learning rate over {0.001, 0.002, 0.005, 0.01} and found 0.001 optimal. We set the penalty weight of the l2 regularization to 0.0005 and the dropout rate to 0.1. (These values are collected into a config sketch below the table.) |
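The Pseudocode row only quotes the paper's pointer to Algorithm 1; the algorithm itself is not reproduced in this report. As a rough, hypothetical illustration of the general bandit-sampling pattern such methods build on (an EXP3-style update, not the paper's Thanos algorithm), the sketch below keeps one weight per neighbor, samples neighbors in proportion to those weights, and applies an importance-weighted exponential update to sampled arms. All names and the reward placeholder are assumptions for illustration.

```python
import math
import random

class Exp3NeighborSampler:
    """Illustrative EXP3-style sampler over one node's neighbor list.

    Hypothetical sketch only -- NOT the paper's Algorithm 1 (Thanos).
    Each neighbor is an arm; in a GNN sampler the reward would reflect
    how much the sampled neighbor reduces embedding approximation error.
    """

    def __init__(self, num_neighbors, eta=0.1):
        self.eta = eta                        # learning rate for weight updates
        self.weights = [1.0] * num_neighbors  # one weight per neighbor (arm)

    def probabilities(self):
        total = sum(self.weights)
        return [w / total for w in self.weights]

    def sample(self, k):
        """Draw k neighbor indices proportionally to their weights
        (with replacement, for simplicity)."""
        return random.choices(range(len(self.weights)),
                              weights=self.probabilities(), k=k)

    def update(self, arm, reward):
        """Importance-weighted exponential update for one sampled arm."""
        p = self.probabilities()[arm]
        self.weights[arm] *= math.exp(self.eta * reward / max(p, 1e-12))

# Toy usage: 8 neighbors, sample 3 per step, feed back a fake reward.
sampler = Exp3NeighborSampler(num_neighbors=8)
for step in range(100):
    for arm in sampler.sample(k=3):
        sampler.update(arm, random.random())  # placeholder reward in [0, 1]
print(sampler.probabilities())
```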
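For the OGB datasets quoted in the Open Datasets row, the standard `ogb` package ships the graphs together with their official splits. The minimal loading sketch below is an assumption about tooling; the paper does not describe its data pipeline.

```python
# Minimal sketch: loading an OGB node-classification dataset with its
# official split, using the ogb package (pip install ogb). Assumed
# tooling; the paper does not state how it loaded the data.
from ogb.nodeproppred import NodePropPredDataset

dataset = NodePropPredDataset(name="ogbn-arxiv")
split_idx = dataset.get_idx_split()   # {'train', 'valid', 'test'} index arrays
graph, labels = dataset[0]            # graph dict (edge_index, node_feat, ...) and labels

print(graph["num_nodes"], labels.shape)
print({k: v.shape for k, v in split_idx.items()})
```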
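The hyperparameters quoted in the Experiment Setup row translate into the following per-dataset configuration. This sketch only records the stated values in code form; the key names and the `config_for` helper are my own, not the authors'.

```python
# Stated hyperparameters collected into one place. Values are quoted
# from the paper; key names are assumptions for illustration.
COMMON = {
    "num_layers": 2,          # two GNN layers for all datasets
    "learning_rate": 0.001,   # best of {0.001, 0.002, 0.005, 0.01}
    "weight_decay": 0.0005,   # l2 regularization penalty weight
    "dropout": 0.1,
}

PER_DATASET = {
    "ogbn-arxiv":    {"hidden_dim": 256, "k": 5},
    "ogbn-products": {"hidden_dim": 256, "k": 10},
    "cora-full":     {"hidden_dim": 256, "k": 3},
    "chameleon":     {"hidden_dim": 16,  "k": 5},
    "squirrel":      {"hidden_dim": 16,  "k": 5},
}

def config_for(name):
    """Merge the shared settings with the per-dataset ones."""
    return {**COMMON, **PER_DATASET[name]}

print(config_for("ogbn-arxiv"))
```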