A Unified Approach for Maximizing Continuous DR-submodular Functions

Authors: Mohammad Pedramfar, Christopher Quinn, Vaneet Aggarwal

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper presents a unified approach for maximizing continuous DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a Frank-Wolfe type offline algorithm for both monotone and non-monotone functions, with different restrictions on the general convex set. We consider settings where the oracle provides access to either the gradient of the function or only the function value, and where the oracle access is either deterministic or stochastic. We determine the number of required oracle accesses in all cases. Our approach gives new/improved results for nine out of the sixteen considered cases, avoids computationally expensive projections in three cases, and matches the performance of state-of-the-art approaches in the remaining four cases. Notably, our approach for the stochastic function value-based oracle enables the first regret bounds with bandit feedback for stochastic DR-submodular functions. The paper presents theoretical algorithms, complexity analysis (e.g., O(1/ε) and O(T^{3/4}) bounds), and approximation guarantees, rather than empirical evaluation on datasets. (A minimal Frank-Wolfe-style sketch appears after this table.)
Researcher Affiliation | Academia | Mohammad Pedramfar, Purdue University, mpedramf@purdue.edu; Christopher John Quinn, Iowa State University, cjquinn@iastate.edu; Vaneet Aggarwal, Purdue University, vaneet@purdue.edu
Pseudocode | Yes | Algorithm 1: Black Box Gradient Estimate (BBGE); Algorithm 2: Generalized DR-Submodular Frank-Wolfe; Algorithm 3: DR-Submodular Explore-Then-Commit. (A generic zeroth-order gradient-estimate sketch appears after this table.)
Open Source Code | No | The paper does not provide any specific links or explicit statements about the availability of open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and focuses on algorithms and complexity bounds for optimization problems. It does not mention the use of any specific datasets for training or evaluation.
Dataset Splits | No | The paper is theoretical and does not mention any training, validation, or test dataset splits.
Hardware Specification | No | The paper does not describe any specific hardware used to conduct its research or experiments.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper focuses on theoretical algorithms and their analysis, and therefore does not provide specific experimental setup details such as hyperparameters or system-level training settings.
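
For readers who want intuition for the Frank-Wolfe-type offline algorithm summarized in the Research Type row, the following is a minimal Python sketch of a continuous-greedy / Frank-Wolfe update for monotone DR-submodular maximization. It assumes access to a gradient oracle and a linear maximization oracle over the feasible convex set; the function names, step size, and iteration count are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the paper's exact Algorithm 2): a Frank-Wolfe /
# continuous-greedy style ascent for monotone continuous DR-submodular
# maximization over a convex set K. `grad_oracle` and `lmo` are assumed
# to be provided by the caller.

import numpy as np

def frank_wolfe_dr_submodular(grad_oracle, lmo, dim, num_iters=100):
    """Frank-Wolfe style ascent.

    grad_oracle(x) -> (possibly stochastic) gradient estimate of F at x.
    lmo(g)         -> argmax over v in K of <g, v>, a linear maximization
                      oracle over the feasible convex set K.
    """
    x = np.zeros(dim)          # assumes the origin is feasible (e.g. a down-closed set)
    step = 1.0 / num_iters     # uniform step size 1/T
    for _ in range(num_iters):
        g = grad_oracle(x)     # gradient (or smoothed estimate) at the current point
        v = lmo(g)             # best feasible direction with respect to g
        x = x + step * v       # small step toward v; no projection is needed
    return x
```

Because each iteration only calls a linear maximization oracle, the update is projection-free, which is consistent with the paper's claim of avoiding computationally expensive projections in several of the considered settings.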
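When only a function value oracle is available (the bandit and zeroth-order settings mentioned above), gradients must be estimated from function evaluations. The sketch below shows a generic two-point sphere-sampling estimator in the spirit of Algorithm 1 (BBGE); the smoothing radius, batch size, and function names are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch (not the paper's exact Algorithm 1): a zeroth-order
# gradient estimate built from a (possibly noisy) value oracle using the
# standard two-point spherical smoothing estimator.

import numpy as np

def black_box_gradient_estimate(value_oracle, x, delta=1e-2, batch_size=16):
    """Estimate grad F(x) using only function-value queries.

    value_oracle(y) -> a sample of F(y) (deterministic or stochastic).
    Averages (d / (2*delta)) * (F(x + delta*u) - F(x - delta*u)) * u
    over random unit directions u drawn uniformly from the sphere.
    """
    d = x.shape[0]
    est = np.zeros(d)
    for _ in range(batch_size):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)                       # uniform direction on the unit sphere
        forward = value_oracle(x + delta * u)
        backward = value_oracle(x - delta * u)
        est += (d / (2.0 * delta)) * (forward - backward) * u
    return est / batch_size
```

Feeding such an estimator into the Frank-Wolfe loop above illustrates how value-oracle (bandit-style) feedback can drive the same projection-free update, which is the flavor of result the paper establishes with formal regret guarantees.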