Networked Restless Bandits with Positive Externalities
Authors: Christine Herlihy, John P. Dickerson
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical results demonstrate that GRETA outperforms comparison policies across a range of hyperparameter values and graph topologies. |
| Researcher Affiliation | Academia | Christine Herlihy, John P. Dickerson; Department of Computer Science, University of Maryland, College Park; College Park, MD, USA; cherlihy@umd.edu, johnd@umd.edu |
| Pseudocode | Yes | Algorithm 1: Compute Whittle indices for V_A \ {0} (...) Algorithm 2: GRETA: graph-aware, Whittle-based heuristic (...) Algorithm 3: Compute cost to pull u and message v (...) Algorithm 4: Cumulative subsidy of max pull-message set (...) Algorithm 5: Compute edge index values. A generic Whittle-index sketch (not the paper's Algorithm 1) is given below the table. |
| Open Source Code | Yes | Code and appendices are available at https://github.com/crherlihy/networked_restless_bandits. |
| Open Datasets | No | The paper generates synthetic data: 'We consider a synthetic cohort of n = 100 restless arms whose transition matrices are randomly generated in such a way so as to satisfy the structural constraints introduced in Section 2. We use a stochastic block model (SBM) generator with pin = 0.2 and pout = 0.05, and consider both the random and by cluster options for φ.' It does not use a publicly available, pre-existing dataset with concrete access information. An illustrative SBM generation sketch follows the table. |
| Dataset Splits | No | The paper performs experiments on synthetic data but does not specify train/validation/test splits with percentages, sample counts, or references to predefined splits for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. It only mentions general experimental setup. |
| Software Dependencies | No | The paper mentions using a 'K-MEANS algorithm' and cites 'Scikit-learn: Machine Learning in Python' (Pedregosa et al. 2011), but it does not provide specific version numbers for these or any other software components used in the experiments. |
| Experiment Setup | Yes | Figure 1 reports results for a synthetic cohort of 8 arms embedded in a fully connected graph (i.e., pin = pout = 1.0). We let T = 120, ψ = 0.5, and report unnormalized E_π[R], along with margins of error for 95% confidence intervals computed over 50 simulation seeds for values of B ∈ {1, 1.5, 2, 2.5, 3}. (...) We let T = 120, B = 10, and ψ = 0.5. (...) We hold message cost fixed at ψ = 0.5, let pin = 0.25, pout = 0.05, and consider values of B ∈ {5%, 10%, 15%} of n. (...) Here, we hold the budget fixed at 6, let pin = 0.25, pout = 0.05, and consider values of ψ ∈ {0.0, 0.25, 0.5, 0.75, 0.9}. A confidence-interval sketch follows the table. |
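
The Pseudocode row lists Whittle-index routines (Algorithms 1 and 5) at the core of GRETA; the paper's exact procedures are in the linked repository. The sketch below only illustrates the standard idea of computing a Whittle index for a single two-state restless arm by binary-searching the passivity subsidy at which acting and staying passive are equally valuable. All names (`whittle_index`, `P_passive`, `P_active`) and the example transition matrices are illustrative and not taken from the paper.

```python
import numpy as np

def whittle_index(P_passive, P_active, reward, state, horizon=120,
                  lo=-2.0, hi=2.0, tol=1e-4):
    """Binary-search the passivity subsidy at which acting on `state` and
    remaining passive have (approximately) equal value for one arm.
    Generic sketch only; the paper's Algorithm 1 may differ."""

    def action_gap(subsidy):
        # Finite-horizon value iteration in which the passive action earns
        # an extra per-step subsidy; returns Q(active) - Q(passive) at `state`.
        V = np.zeros(len(reward))
        gap = 0.0
        for _ in range(horizon):
            q_passive = reward + subsidy + P_passive @ V
            q_active = reward + P_active @ V
            gap = q_active[state] - q_passive[state]
            V = np.maximum(q_passive, q_active)
        return gap

    # Assumes the root lies in [lo, hi]; a fuller implementation would widen
    # the bracket or derive bounds from the reward scale.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if action_gap(mid) > 0:   # acting still preferred: raise the subsidy
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative two-state arm: acting raises the chance of the rewarding state 1.
P_passive = np.array([[0.9, 0.1], [0.4, 0.6]])
P_active = np.array([[0.3, 0.7], [0.1, 0.9]])
reward = np.array([0.0, 1.0])
print(whittle_index(P_passive, P_active, reward, state=0))
```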
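The Open Datasets row quotes a synthetic cohort of n = 100 arms whose graph is drawn from a stochastic block model with p_in = 0.2 and p_out = 0.05. A minimal generation sketch using `networkx.stochastic_block_model` is shown below; the number and sizes of blocks are assumptions, since the excerpt does not state how the cohort is partitioned into clusters.

```python
import networkx as nx

# Sketch of the synthetic graph described in the Open Datasets row:
# n = 100 arms on a stochastic block model with p_in = 0.2, p_out = 0.05.
# Four equal-sized blocks are an assumption; the excerpt does not specify
# how the 100 arms are divided into clusters.
n, n_blocks = 100, 4
sizes = [n // n_blocks] * n_blocks
p_in, p_out = 0.2, 0.05
probs = [[p_in if i == j else p_out for j in range(n_blocks)]
         for i in range(n_blocks)]

G = nx.stochastic_block_model(sizes, probs, seed=0)
print(G.number_of_nodes(), G.number_of_edges())
```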
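The Experiment Setup row reports margins of error for 95% confidence intervals computed over 50 simulation seeds. The sketch below shows one common convention (normal approximation, 1.96 standard errors); the paper's exact CI construction is not specified in the quoted text, so this is illustrative only, and the per-seed returns here are synthetic placeholders.

```python
import numpy as np

def ci_margin(per_seed_returns, z=1.96):
    """Margin of error for a 95% CI over simulation seeds, using a normal
    approximation; the paper may use a different construction."""
    x = np.asarray(per_seed_returns, dtype=float)
    return z * x.std(ddof=1) / np.sqrt(len(x))

# e.g., 50 seeds of hypothetical cumulative reward E_pi[R]
rng = np.random.default_rng(0)
returns = rng.normal(loc=60.0, scale=5.0, size=50)
print(returns.mean(), ci_margin(returns))
```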