Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions
Authors: Wenruo Bai, Jeff Bilmes
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "5. Computational Experiments: We empirically test our guarantees for BP maximization subject to a cardinality constraint on contrived functions using GREEDMAX and SEMIGRAD." |
| Researcher Affiliation | Academia | 1Department of Electrical Engineering, University of Washington, Seattle, USA 2Department of Computer Science and Engineering, University of Washington, Seattle, USA. |
| Pseudocode | Yes | Algorithm 1: GREEDMAX for BP maximization |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating the release of source code for the described methodology. |
| Open Datasets | No | The experiments use synthetic ("contrived") functions rather than datasets: "For the first experiment, we let |V| = 20, set the cardinality constraint to k = 10, and partition the ground set into |V1| = |V2| = k, V1 ∪ V2 = V, where V1 = {v1, v2, ..., vk}. Let w_i = (1/α)[1 − (1 − α)^(k−i+1)] for i = 1, 2, ..., k. Then we define the submodular and supermodular functions as follows: f(X) = [(k − α|X ∩ V2|)/k] Σ_{i: v_i ∈ X} w_i + |X ∩ V2|, g(X) = |X| − β min(1 + |X ∩ V1|, |X|, k) + ε max(|X|, |X| + (β/(1 − β))(|X ∩ V2| − k + 1)), and h(X) = λf(X) + (1 − λ)g(X) for 0 ≤ α, β, λ ≤ 1 and ε = 1 × 10^(−5)." |
| Dataset Splits | No | No explicit mention of specific dataset split percentages, sample counts, or references to predefined splits for training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for computational experiments. |
| Software Dependencies | No | The paper does not provide specific details about ancillary software, including names and version numbers. |
| Experiment Setup | Yes | For the first experiment, we let |V| = 20, set the cardinality constraint to k = 10, and partition the ground set into |V1| = |V2| = k, V1 ∪ V2 = V, where V1 = {v1, v2, ..., vk}. Let w_i = (1/α)[1 − (1 − α)^(k−i+1)] for i = 1, 2, ..., k. Then we define the submodular and supermodular functions as follows: f(X) = [(k − α|X ∩ V2|)/k] Σ_{i: v_i ∈ X} w_i + |X ∩ V2|, g(X) = |X| − β min(1 + |X ∩ V1|, |X|, k) + ε max(|X|, |X| + (β/(1 − β))(|X ∩ V2| − k + 1)), and h(X) = λf(X) + (1 − λ)g(X) for 0 ≤ α, β, λ ≤ 1 and ε = 1 × 10^(−5). Immediately, we notice that κ_f = α and κ_g = β. In particular, we choose α, β, λ ∈ {0, 0.01, 0.02, ..., 1}, and in all cases we normalize h(X) via exhaustive search so that OPT = h(X*) = 1. |
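The setup above is small enough (|V| = 20, k = 10) to reproduce directly. The sketch below is an illustrative reimplementation, not the authors' code: the exact forms of f and g are reconstructed from garbled PDF-extracted text and may differ in detail from the paper, and `make_h`, `greedmax`, and `opt_exhaustive` are hypothetical names. GREEDMAX itself is the plain greedy rule (repeatedly add the element with the largest marginal gain), and exhaustive search over all size-k subsets supplies the OPT used for normalization.

```python
# Hedged sketch of the paper's synthetic BP experiment. The f/g formulas are
# reconstructed from extracted text and are an assumption, not verbatim code.
from itertools import combinations

n, k = 20, 10
V1 = set(range(k))       # v_1, ..., v_k (0-indexed as 0..k-1)
V2 = set(range(k, n))

def make_h(alpha, beta, lam, eps=1e-5):
    """Build h = lam*f + (1-lam)*g for one (alpha, beta, lam) setting."""
    # w_i = (1/alpha)[1 - (1-alpha)^(k-i+1)]; the alpha -> 0 limit is k-i+1.
    w = [(1 / alpha) * (1 - (1 - alpha) ** (k - i)) if alpha > 0 else float(k - i)
         for i in range(k)]

    def f(X):  # monotone submodular component
        s = sum(w[i] for i in X & V1)
        return ((k - alpha * len(X & V2)) / k) * s + len(X & V2)

    def g(X):  # monotone supermodular component
        t = len(X) - beta * min(1 + len(X & V1), len(X), k)
        bump = beta / (1 - beta) * (len(X & V2) - k + 1) if beta < 1 else 0.0
        return t + eps * max(len(X), len(X) + bump)

    return lambda X: lam * f(X) + (1 - lam) * g(X)

def greedmax(h, budget):
    """GREEDMAX: greedily add the element of largest marginal gain
    until the cardinality constraint |X| = budget binds."""
    X = set()
    for _ in range(budget):
        v = max(set(range(n)) - X, key=lambda u: h(X | {u}) - h(X))
        X.add(v)
    return X

def opt_exhaustive(h, budget):
    """Brute-force OPT over all C(n, budget) subsets, used so the
    reported values can be normalized to OPT = h(X*) = 1."""
    return max(h(set(S)) for S in combinations(range(n), budget))
```

For example, `h = make_h(0.5, 0.5, 0.5)` followed by `greedmax(h, 10)` returns a size-10 greedy set whose value can be divided by `opt_exhaustive(h, 10)` to get the empirical approximation ratio; sweeping α, β, λ over {0, 0.01, ..., 1} reproduces the grid described in the table.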