Nearly Minimax Optimal Submodular Maximization with Bandit Feedback
Authors: Artin Tajdini, Lalit Jain, Kevin G. Jamieson
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments compare SUB-UCB(l) for different greedy stop levels l, SUB-UCB(k_i) which selects the best stop level based on the regret analysis, the ETCG (explore-then-commit greedy) algorithm from [28], and UCB on all size-k arms. Each arm pull has 1-sub-Gaussian noise, with 50 trials for each setting. The expected reward functions are defined in the text. Figure 1: Regret comparison for weighted set cover with n = 15 and k = 4. Figure 2: Comparison between all SUB-UCB greedy stop cardinality choices for the unique greedy path function with n = 20 and k = 5. |
| Researcher Affiliation | Academia | Artin Tajdini, Lalit Jain, Kevin Jamieson University of Washington, Seattle, WA {artin, jamieson}@cs.washington.edu, lalitj@uw.edu |
| Pseudocode | Yes | Algorithm 1 SUB-UCB algorithm for set bandits with cardinality constraints |
| Open Source Code | No | The paper states: "Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [No] Justification: This paper is a theoretical work, and the experiments apply the proposed algorithm in main text on synthetic functions which are fully defined in the main text. There are no datasets used, and no code beyond the main algorithm implementation." |
| Open Datasets | No | The paper defines the functions used for experiments directly within the text (e.g., "The Unique greedy path hard instance" and "Weighted set cover function"). It does not use or provide access to any external public datasets. |
| Dataset Splits | No | The paper mentions "50 trials for each setting" but does not specify any dataset splits (training, validation, test) as the data is generated on-the-fly by defined functions during the experiments. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. The checklist states: "Justification: There are no experiments that need anything more than a laptop." |
| Software Dependencies | No | The paper does not mention any specific software dependencies or version numbers (e.g., programming languages, libraries, frameworks, or solvers) used for implementation or experiments. |
| Experiment Setup | Yes | For the experiments we compare SUB-UCB(l) for different greedy stop levels l, SUB-UCB(k_i) which selects the best stop level based on the regret analysis, the ETCG (explore-then-commit greedy) algorithm from [28], and UCB on all size-k arms. Each arm pull has 1-sub-Gaussian noise, with 50 trials for each setting. The expected reward functions are: (1) the unique greedy path hard instance, $f(S) = \sum_{i=1}^{\|S\|} \frac{1}{k+i}$ if $S \neq \{1, \dots, \|S\|\}$ and $f(S) = \sum_{i=1}^{\|S\|} \frac{1}{k+i} + \frac{1}{100}$ if $S = \{1, \dots, \|S\|\}$; (2) the weighted set cover function, $f_{\mathcal{C}}(S) = \sum_{C \in \mathcal{C}} w(C)\,\mathbf{1}\{S \cap C \neq \emptyset\}$ for a partition $\mathcal{C}$ of $[n]$ and weight function $w$ on the partition. |
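The two synthetic reward functions in the experiment setup can be sketched in a few lines. This is a minimal illustration only: the function names (`unique_greedy_path`, `weighted_set_cover`) and the exact reconstruction of the cases (a 1/100 bonus on the unique greedy prefix sets) are assumptions inferred from the garbled formulas in the text, not the authors' code.

```python
def unique_greedy_path(S, k):
    """Sketch of the unique greedy path hard instance:
    f(S) = sum_{i=1}^{|S|} 1/(k+i), with an assumed +1/100 bonus
    when S is the greedy prefix {1, ..., |S|}."""
    base = sum(1.0 / (k + i) for i in range(1, len(S) + 1))
    bonus = 1.0 / 100 if set(S) == set(range(1, len(S) + 1)) else 0.0
    return base + bonus

def weighted_set_cover(S, partition, weights):
    """Sketch of the weighted set cover function:
    f_C(S) = sum over parts C of w(C) * 1{S intersects C}."""
    S = set(S)
    return sum(w for w, C in zip(weights, partition) if S & set(C))
```

For example, `weighted_set_cover({1, 5}, [[1, 2], [3, 4], [5]], [2.0, 1.0, 3.0])` covers the first and third parts of the partition and returns 5.0; both functions are monotone submodular, which is the structural assumption the SUB-UCB regret analysis relies on.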