Unified Projection-Free Algorithms for Adversarial DR-Submodular Optimization

Authors: Mohammad Pedramfar, Yididiya Y. Nadew, Christopher John Quinn, Vaneet Aggarwal

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test our online continuous DR-submodular maximization algorithms for non-monotone objectives, a downward-closed feasible region, and both full-information and semi-bandit gradient feedback. We briefly describe the setup and highlight key results. See Appendix H for more details. We use online non-convex/non-concave non-monotone quadratic maximization following (Bian et al., 2017a; Chen et al., 2018b; Zhang et al., 2023), randomly generating linear inequalities to form a downward-closed feasible region, and for each round t we generate a quadratic function F_t(x) = (1/2) x^T H x + h^T x + c. Similar to (Zhang et al., 2023), we considered three pairs (n, m) of dimension n and number of constraints m: {(25, 15), (40, 20), (50, 50)}. (...) Figure 2 shows both averaged regret within runs for a fixed horizon (top row) and cumulative regret for different horizons, averaged over 10 independent runs. (A sketch of this instance generation appears after the table.)
Researcher Affiliation | Academia | Mohammad Pedramfar, School of Industrial Engineering, Purdue University, West Lafayette, IN 47907, USA, mpedramf@purdue.edu; Yididiya Y. Nadew, Department of Computer Science, Iowa State University, Ames, IA 50010, USA, yididiya@iastate.edu; Christopher J. Quinn, Department of Computer Science, Iowa State University, Ames, IA 50010, USA, cjquinn@iastate.edu; Vaneet Aggarwal, School of Industrial Engineering, Purdue University, West Lafayette, IN 47907, USA, vaneet@purdue.edu
Pseudocode | Yes | Algorithm 1: Generalized Meta-Frank-Wolfe and Algorithm 2: Generalized (Semi-)Bandit-Frank-Wolfe. (A generic projection-free Frank-Wolfe sketch appears after the table.)
Open Source Code | Yes | Source code for our algorithms is available at https://github.com/yididiyan/unified-dr-submodular.
Open Datasets | No | We use online non-convex/non-concave non-monotone quadratic maximization... randomly generating linear inequalities to form a downward-closed feasible region and for each round t we generate a quadratic function F_t(x) = (1/2) x^T H x + h^T x + c. The paper describes synthetic data generation for its experiments, rather than using a publicly available dataset with specific access information.
Dataset Splits | No | The paper describes an online learning setting where functions are generated per round, rather than using fixed training, validation, and test splits from a static dataset. Therefore, specific dataset split information is not provided in the traditional sense.
Hardware Specification | Yes | For the experiments we ran to generate Fig. 2, we used a compute cluster. The nodes had AMD EPYC 7543P processors. Individual jobs used 8 cores and 4 GB of memory. For the running times reported in Table 3, we conducted the experiments on a single desktop machine with a 12-core AMD Ryzen Threadripper 1920X processor and 64 GB of memory.
Software Dependencies | Yes | We used Python (v3.8.10) and CVXOPT (v1.3.0), a free software package for convex optimization. (A CVXOPT-based linear-oracle sketch appears after the table.)
Experiment Setup | Yes | We use online non-convex/non-concave non-monotone quadratic maximization... randomly generating linear inequalities to form a downward-closed feasible region and for each round t we generate a quadratic function F_t(x) = (1/2) x^T H x + h^T x + c. Here, the coefficient matrix H is a symmetric matrix whose entries H_{i,j} are drawn from a uniform distribution on [-10, 0]... To make F_t non-monotone, we set h = -0.1 H^T u, where u ∈ R^n. Similar to (Zhang et al., 2023), we set u = 1. In addition, to ensure non-negativity, the constant term is set to c = 0.5 Σ... Like Zhang et al. (2023), we considered three pairs (n, m) of dimension n and number of constraints m: {(25, 15), (40, 20), (50, 50)}. For a gradient query ∇F_t(x), we generate a random noise vector n_t ∼ N(0, 1) and return the noisy gradient ∇F_t(x) + ϵ · n_t/||n_t||, with a noise scale ϵ = 0.1. We vary horizons between T = 20 and T = 500. (A sketch of the noisy-gradient oracle appears after the table.)
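
The following is a minimal sketch of the synthetic instance generation quoted in the Research Type and Experiment Setup rows, assuming NumPy. Only the quoted facts are used: H symmetric with entries uniform in [-10, 0], h = -0.1 H^T u with u = 1, a constant c for non-negativity, and a downward-closed region from random linear inequalities. The sampling range for the constraints (A, b), the exact formula for c, and the function names are illustrative assumptions, since the excerpt truncates those details.

```python
# Sketch (not the authors' code) of one round's synthetic quadratic instance.
import numpy as np

def make_round_function(n, m, rng):
    H = rng.uniform(-10.0, 0.0, size=(n, n))
    H = 0.5 * (H + H.T)                      # symmetric coefficient matrix, entries in [-10, 0]
    u = np.ones(n)                           # u = 1 (all-ones vector), as in the quoted setup
    h = -0.1 * H.T @ u                       # makes F_t non-monotone
    c = 0.5 * np.abs(H).sum()                # assumed offset ensuring F_t >= 0 on [0, 1]^n
    A = rng.uniform(0.0, 1.0, size=(m, n))   # assumed sampling range; A >= 0 keeps the region
    b = np.ones(m)                           # {x >= 0 : A x <= b} downward-closed
    F = lambda x: 0.5 * x @ H @ x + h @ x + c
    grad_F = lambda x: H @ x + h             # exact gradient (H symmetric)
    return F, grad_F, A, b

rng = np.random.default_rng(0)
F, grad_F, A, b = make_round_function(n=25, m=15, rng=rng)
```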
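
Below is a generic projection-free Frank-Wolfe-style loop built around a linear maximization oracle (LMO), included only to illustrate the kind of update that Algorithms 1 and 2 generalize. It is not a transcription of the paper's pseudocode, which feeds (estimated) gradients to online linear-optimization oracles rather than calling an exact LMO on exact gradients; the `linprog`-based oracle and the step rule here are assumptions.

```python
# Generic projection-free Frank-Wolfe sketch over {0 <= x <= 1, A x <= b}.
# Illustrative baseline only, not the paper's Generalized Meta-Frank-Wolfe.
import numpy as np
from scipy.optimize import linprog

def lmo(direction, A, b):
    # argmax_{v in polytope} <direction, v>, solved as a linear program.
    res = linprog(-direction, A_ub=A, b_ub=b, bounds=[(0.0, 1.0)] * direction.size)
    return res.x

def frank_wolfe(grad_F, A, b, n, K=50):
    x = np.zeros(n)
    for _ in range(K):
        v = lmo(grad_F(x), A, b)
        x = x + v / K    # final x is a convex combination of feasible points, hence feasible
    return x
```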
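
Since the reported software stack is Python with CVXOPT, here is a minimal sketch of a linear maximization oracle over the downward-closed polytope implemented with CVXOPT's LP solver. Only the use of Python + CVXOPT is taken from the report; the polytope form {0 <= x <= 1, A x <= b} and all names are illustrative assumptions, not taken from the authors' repository.

```python
# Minimal CVXOPT-based linear maximization oracle (LMO) sketch.
import numpy as np
from cvxopt import matrix, solvers

solvers.options["show_progress"] = False

def lmo_cvxopt(direction, A, b):
    n = direction.size
    # maximize <direction, x>  ==  minimize <-direction, x>
    c = matrix(-direction.astype(float))
    # Stack A x <= b, x <= 1, and -x <= 0 into a single G x <= h system.
    G = matrix(np.vstack([A, np.eye(n), -np.eye(n)]))
    h = matrix(np.concatenate([b, np.ones(n), np.zeros(n)]))
    sol = solvers.lp(c, G, h)
    return np.array(sol["x"]).ravel()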
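
Finally, a sketch of the noisy gradient feedback quoted in the Experiment Setup row (Gaussian noise, normalized and scaled by ϵ = 0.1). Here `grad_F` is the exact gradient of F_t; the function name and signature are illustrative.

```python
# Noisy gradient oracle as quoted above: grad F_t(x) + eps * n_t / ||n_t||,
# with n_t ~ N(0, I) and noise scale eps = 0.1.
import numpy as np

def noisy_gradient(grad_F, x, eps=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    g = grad_F(x)
    n_t = rng.standard_normal(g.size)
    return g + eps * n_t / np.linalg.norm(n_t)
```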