Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback

Authors: Mingrui Zhang, Lin Chen, Hamed Hassani, Amin Karbasi

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we propose three online algorithms for submodular maximization. The first one, Mono-Frank-Wolfe, reduces the number of per-function gradient evaluations from T^{1/2} [18] and T^{3/2} [17] to 1, and achieves a (1 − 1/e)-regret bound of O(T^{4/5}). The second one, Bandit-Frank-Wolfe, is the first bandit algorithm for continuous DR-submodular maximization, which achieves a (1 − 1/e)-regret bound of O(T^{8/9}). Finally, we extend Bandit-Frank-Wolfe to a bandit algorithm for discrete submodular maximization, Responsive-Frank-Wolfe, which attains a (1 − 1/e)-regret bound of O(T^{8/9}) in the responsive bandit setting.
Researcher Affiliation | Academia | Department of Statistics and Data Science, Yale University; Department of Electrical Engineering, Yale University; Department of Electrical and Systems Engineering, University of Pennsylvania; Department of Computer Science, Yale University. Emails: {mingrui.zhang, lin.chen, amin.karbasi}@yale.edu; hassani@seas.upenn.edu
Pseudocode | Yes | We present our proposed Mono-Frank-Wolfe algorithm in Algorithm 1. We describe our algorithm formally in Algorithm 2. We present Responsive-Frank-Wolfe in Algorithm 3.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | No | The paper is theoretical and does not conduct empirical studies with datasets, so no public dataset access information is provided.
Dataset Splits | No | The paper is theoretical and does not involve empirical studies with data, so no training/validation/test dataset splits are discussed.
Hardware Specification | No | The paper is theoretical and does not include details on hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not provide specific ancillary software details with version numbers.
Experiment Setup | No | The paper is theoretical and does not include details about specific experimental setup, hyperparameters, or system-level training settings.
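For context on the Frank-Wolfe variants the abstract names, a minimal offline sketch may help. This is not the paper's Mono-Frank-Wolfe or Bandit-Frank-Wolfe (those operate online with stochastic gradient estimates and blocking); it illustrates only the basic Frank-Wolfe/continuous-greedy update that such variants build on, which attains the (1 − 1/e) guarantee for monotone DR-submodular maximization. The coverage function `f`, the budget constraint, and the step count `K` are all illustrative assumptions, not from the paper.

```python
import numpy as np

def f(x):
    # Probabilistic-coverage objective 1 - prod_i (1 - x_i):
    # a standard monotone DR-submodular function on [0, 1]^n (toy example).
    return 1.0 - np.prod(1.0 - x)

def grad_f(x):
    # Partial derivative w.r.t. x_i is prod_{j != i} (1 - x_j).
    n = len(x)
    return np.array([np.prod(np.delete(1.0 - x, i)) for i in range(n)])

def frank_wolfe(n=5, budget=1.0, K=100):
    # Continuous-greedy / Frank-Wolfe: start at 0 and take K small steps,
    # each toward the feasible point best aligned with the current gradient.
    x = np.zeros(n)
    for _ in range(K):
        g = grad_f(x)
        # Linear maximization oracle over {v : 0 <= v, sum(v) <= budget}
        # (with budget <= 1, all mass goes to the largest-gradient coordinate).
        v = np.zeros(n)
        v[np.argmax(g)] = budget
        x = x + v / K  # step size 1/K keeps the final x feasible
    return x, f(x)
```

With these toy settings the iterates concentrate on a single coordinate, which is in fact optimal for this objective under a unit budget; the online variants in the paper replace the exact gradient with one stochastic gradient per arriving function.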