Using Partial Monotonicity in Submodular Maximization

Authors: Loay Mualem, Moran Feldman

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we discuss two concrete applications with non-monotone submodular objective functions. For each application we provide a lower bound on the monotonicity ratio m of the objective function, which translates via our results from the previous sections into an improved approximation guarantee for the application. To demonstrate the value of our improved guarantees in experiments, we took the following approach.
Researcher Affiliation | Academia | Loay Mualem, Department of Computer Science, University of Haifa, Haifa 3303221, Israel, loaymua@gmail.com; Moran Feldman, Department of Computer Science, University of Haifa, Haifa 3303221, Israel, moranfe@cs.haifa.ac.il
Pseudocode | Yes | In Appendix D.1 we prove Theorem 4.1, which generalizes the result of [44] and proves an approximation guarantee for the greedy algorithm that deteriorates gracefully with the monotonicity ratio m. Theorem 4.1. The Greedy algorithm (Algorithm 2) has an approximation ratio of at least m(1 − 1/e) for the problem of maximizing a non-negative m-monotone submodular function subject to a cardinality constraint.
Open Source Code | No | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No]
Open Datasets | Yes | To demonstrate the value of our lower bound on the monotonicity ratio, we followed the experimental setup of [40] and used a subset of movies from the MovieLens data set [28] which includes 10,437 movies.
Dataset Splits | No | The paper mentions using the MovieLens dataset [28] and discusses data processing ("Each movie in this data set is represented by a 25-dimensional feature vector calculated using user ratings"), but does not provide specific percentages or counts for training, validation, or test splits in the main body.
Hardware Specification | No | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
Software Dependencies | No | This value can be approximately obtained by using QUADPROGIP [53]. We have used IBM CPLEX Optimization Studio (https://www.ibm.com/products/ilog-cplex-optimization-studio).
Experiment Setup | Yes | Each point in the plots represents the average value of the outputs of 10 executions of these algorithms. In Figure 2a we plot these values for the case in which we asked the algorithms to pick at most 10 movies, and we vary the parameter λ. Following Bian et al. [2], we set m = n, choose the matrix H ∈ R^{n×n} to be a randomly generated symmetric matrix whose entries are drawn uniformly at random (and independently) from [−1, 0], and choose A ∈ R^{m×n} to be a randomly generated matrix whose entries are drawn uniformly at random from [v, v + 1] for v = 0.01 (this choice of v guarantees that the entries of A are strictly positive). We also set b = 1 (i.e., b is the all-ones vector), and u to be the upper bound on P given by u_j = min_{i∈[m]} b_i/A_{i,j} for every j ∈ [n]. Finally, we set h = −β·H^T u for a parameter β > 0.
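The greedy algorithm quoted in the Pseudocode row (Theorem 4.1, Algorithm 2 in the paper) can be sketched as follows. This is a minimal illustration, not the paper's code: the coverage objective below is a hypothetical stand-in for the paper's objectives (it happens to be monotone, i.e., m = 1, where the bound of Theorem 4.1 recovers the classic 1 − 1/e guarantee).

```python
def greedy(f, ground_set, k):
    """Plain greedy under a cardinality constraint: repeatedly add the
    element with the largest marginal gain f(S + e) - f(S)."""
    S = set()
    for _ in range(k):
        best_elem, best_gain = None, None
        for e in sorted(ground_set - S):  # sorted only for deterministic ties
            gain = f(S | {e}) - f(S)
            if best_gain is None or gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is None:
            break
        S = S | {best_elem}
    return S


# Toy coverage objective (submodular, non-negative) as an illustrative input.
subsets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}

def coverage(S):
    return len(set().union(*(subsets[e] for e in S))) if S else 0

chosen = greedy(coverage, set(subsets), k=2)
```

Here the first pick is element 1 (marginal gain 2), and the second adds one more uncovered item, so the chosen pair covers 3 of the 4 items.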
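The instance generation described in the Experiment Setup row can be sketched in a few lines of NumPy. This is a reconstruction under assumptions, not the authors' code: the dimension n, seed, and β are arbitrary illustrative choices, and the sign convention h = −β·H^T u is assumed (with H's entries in [−1, 0], this makes h non-negative; the minus sign appears to have been lost in the extracted text).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                      # illustrative dimension; the paper sets m = n
m, v, beta = n, 0.01, 1.0  # v makes A strictly positive; beta > 0 is a parameter

# Symmetric H in R^{n x n} with entries drawn uniformly from [-1, 0];
# symmetrizing by averaging keeps every entry inside [-1, 0].
H = rng.uniform(-1.0, 0.0, size=(n, n))
H = (H + H.T) / 2.0

# A in R^{m x n} with entries drawn uniformly from [v, v + 1].
A = rng.uniform(v, v + 1.0, size=(m, n))

b = np.ones(m)  # b = 1, the all-ones vector

# u_j = min over i in [m] of b_i / A_{i,j}: coordinate-wise upper bound on P.
u = (b[:, None] / A).min(axis=0)

# h = -beta * H^T u (sign convention assumed, see lead-in above).
h = -beta * H.T @ u
```

The broadcasted division `b[:, None] / A` produces the full m × n table of ratios b_i/A_{i,j}, and `min(axis=0)` takes the minimum over the constraint index i, matching the definition of u.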