Graph Neural Network Bandits

Authors: Parnian Kassraie, Andreas Krause, Ilija Bogunovic

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (5 experiments) | We create synthetic datasets which may be of independent interest and can be used for evaluating and benchmarking machine learning algorithms on graph domains. Each dataset is constructed from a finite graph domain together with a reward function. ... Regret Experiments. We assess the performance of the algorithms on bandit optimization tasks over different domains. In Figure 1, we show the inference cumulative regret ... Figure 1 presents the results: GNN-PE consistently outperforms the other methods. (a minimal dataset-construction sketch is given below the table)
Researcher Affiliation | Academia | Parnian Kassraie (ETH Zurich, pkassraie@ethz.ch), Andreas Krause (ETH Zurich, krausea@ethz.ch), Ilija Bogunovic (University College London, i.bogunovic@ucl.ac.uk)
Pseudocode | Yes | We introduce GNN-Phased Elimination (GNN-PE; see Algorithm 1) that consists of episodes of pure exploration over a set of plausible maximizer graphs, similar to [7, 30]. (a schematic elimination-loop sketch is given below the table)
Open Source Code | Yes | 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Appendix D.2 includes the practical details and the instructions required to produce the results.
Open Datasets | Yes | We create synthetic datasets which may be of independent interest and can be used for evaluating and benchmarking machine learning algorithms on graph domains. ... instruction to create similar random datasets is also given.
Dataset Splits | No | The paper describes generating synthetic datasets for bandit problems, where data is observed sequentially. It does not specify fixed train/validation/test splits with percentages or counts in the traditional supervised-learning sense.
Hardware Specification | No | The paper states: 'About 5 hour per CPU core on 100 internal cluster nodes.' This indicates the type of resource (CPU cores on internal cluster nodes) but lacks specific model numbers for CPUs or GPUs and detailed memory specifications.
Software Dependencies | No | The paper states: 'We use the Neural Tangents and PyTorch libraries. Both cited.' However, no specific version numbers for these libraries or other software dependencies are provided.
Experiment Setup | Yes | Experiment Setup. ... We always set width m = 2048 and layers L = 2, for every type of network architecture. ... To configure these algorithms, we only tune λ and β = β_t... The number of gradient descent steps J is chosen adaptively, so that it depends on the number of observations, i.e., J = 10^2 log(t + 1). We use Adam optimizer with default parameters, and set the learning rate η to 5 × 10^-3. The batch size is set to 256.
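
The Experiment Setup row above lists concrete hyperparameters (m = 2048, L = 2, Adam with η = 5 × 10^-3, batch size 256, and J = 10^2 log(t + 1) gradient steps). The snippet below is a minimal PyTorch sketch that collects them into a single training routine; the surrogate here is a plain width-2048 two-layer MLP stand-in, and the λ/β configuration and the Neural Tangents components used in the paper are not reproduced.

```python
import math
import torch
import torch.nn as nn

# Hyperparameters quoted in the Experiment Setup row.
WIDTH = 2048          # m: hidden-layer width
NUM_LAYERS = 2        # L: number of layers (hard-coded in make_surrogate below)
LEARNING_RATE = 5e-3  # eta
BATCH_SIZE = 256


def num_gd_steps(t: int) -> int:
    """Adaptive number of gradient descent steps, J = 10^2 * log(t + 1)."""
    return max(1, int(10 ** 2 * math.log(t + 1)))


def make_surrogate(in_dim: int) -> nn.Module:
    """Hypothetical stand-in for the GNN surrogate: an L = 2, width-m MLP."""
    return nn.Sequential(
        nn.Linear(in_dim, WIDTH), nn.ReLU(),
        nn.Linear(WIDTH, WIDTH), nn.ReLU(),
        nn.Linear(WIDTH, 1),
    )


def fit(model: nn.Module, features: torch.Tensor, rewards: torch.Tensor, t: int) -> None:
    """Fit the surrogate on the t observations gathered so far."""
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)  # default betas/eps
    loss_fn = nn.MSELoss()
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(features, rewards),
        batch_size=BATCH_SIZE, shuffle=True,
    )
    steps_done, steps_target = 0, num_gd_steps(t)
    while steps_done < steps_target:
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x).squeeze(-1), y)
            loss.backward()
            optimizer.step()
            steps_done += 1
            if steps_done >= steps_target:
                break
```

With these settings, at t = 100 observations the routine performs J = ⌊100 · log(101)⌋ = 461 gradient steps.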
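
The Research Type and Open Datasets rows quote the construction of synthetic datasets from a finite graph domain together with a reward function. The sketch below illustrates that recipe under assumptions of my own (Erdős–Rényi random graphs, Gaussian node features, and a reward given by a fixed random linear map of mean-aggregated features); the actual generators described in the paper's appendix may differ.

```python
import numpy as np

rng = np.random.default_rng(0)


def random_graph(num_nodes: int, edge_prob: float, feat_dim: int):
    """One random graph: symmetric 0/1 adjacency matrix plus Gaussian node features."""
    upper = np.triu(rng.random((num_nodes, num_nodes)) < edge_prob, k=1)
    adj = (upper | upper.T).astype(float)
    feats = rng.normal(size=(num_nodes, feat_dim))
    return adj, feats


def make_reward_fn(feat_dim: int):
    """A fixed random reward: linear map of one-hop mean-aggregated node features."""
    w = rng.normal(size=feat_dim)

    def reward(adj: np.ndarray, feats: np.ndarray) -> float:
        deg = adj.sum(axis=1, keepdims=True) + 1.0   # +1 accounts for the self-connection
        agg = (adj @ feats + feats) / deg            # simple one-hop mean aggregation
        return float(np.tanh(agg.mean(axis=0) @ w))

    return reward


def make_domain(num_graphs=200, num_nodes=20, edge_prob=0.2, feat_dim=8):
    """A finite graph domain together with its noise-free rewards."""
    reward = make_reward_fn(feat_dim)
    graphs = [random_graph(num_nodes, edge_prob, feat_dim) for _ in range(num_graphs)]
    rewards = np.array([reward(adj, feats) for adj, feats in graphs])
    return graphs, rewards
```

A bandit algorithm would then query these rewards sequentially, with observation noise added to each pull.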
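
The Pseudocode row refers to GNN-Phased Elimination (Algorithm 1), described as episodes of pure exploration over a set of plausible maximizer graphs. The loop below is only a schematic phased-elimination skeleton, not the paper's Algorithm 1: the uniform within-episode exploration and the generic predict_with_ci confidence routine are placeholders for the GNN/NTK-based rules used in the paper.

```python
import numpy as np


def phased_elimination(domain, pull, predict_with_ci,
                       num_episodes=10, pulls_per_episode=50, seed=0):
    """Schematic phased elimination over a finite domain of candidate graphs.

    domain:          list of candidate graphs.
    pull(g):         returns a noisy reward observation for graph g.
    predict_with_ci: fits a surrogate on the observations and returns per-candidate
                     (mean, width) arrays so that mean +/- width is a confidence interval.
    """
    rng = np.random.default_rng(seed)
    active = list(range(len(domain)))          # indices of plausible maximizer graphs
    obs_x, obs_y = [], []

    for _ in range(num_episodes):
        # Pure exploration: sample uniformly among the still-plausible graphs.
        for _ in range(pulls_per_episode):
            i = int(rng.choice(active))
            obs_x.append(domain[i])
            obs_y.append(pull(domain[i]))

        # Refit the surrogate and drop graphs that cannot be the maximizer.
        mean, width = predict_with_ci(obs_x, obs_y, [domain[i] for i in active])
        best_lcb = np.max(np.asarray(mean) - np.asarray(width))
        active = [i for i, m, w in zip(active, mean, width) if m + w >= best_lcb]

    return active
```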