Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity
Authors: Matthew Fahrbach, Vahab Mirrokni, Morteza Zadimoghaddam
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our empirical study, we use three real-world applications to compare our algorithm with several benchmarks for non-monotone submodular maximization. The results demonstrate that our algorithm finds competitive solutions using significantly fewer rounds and queries. |
| Researcher Affiliation | Collaboration | ¹Georgia Institute of Technology, ²Google. Correspondence to: Matthew Fahrbach <matthew.fahrbach@gatech.edu>. |
| Pseudocode | Yes | Algorithm 1 THRESHOLD-SAMPLING; Algorithm 2 UNCONSTRAINED-MAX; Algorithm 3 ADAPTIVE-NONMONOTONE-MAX (see the illustrative sketch after the table) |
| Open Source Code | No | The paper does not provide any information about open-sourcing the code for the described methodology or link to a repository. |
| Open Datasets | Yes | We perform our image summarization experiment on the CIFAR-10 test set (Krizhevsky & Hinton, 2009)... We randomly sample 500 movies from the MovieLens 20M dataset (Harper & Konstan, 2016)... We consider the top 5,000 communities of the YouTube network (Leskovec & Krevl, 2014). |
| Dataset Splits | No | The paper mentions a "subsampled ground set" of 500 images drawn from the "CIFAR-10 test set", but it does not specify a training/validation/test split for any of the datasets used in the experiments. For example, it does not state how the 500 subsampled CIFAR-10 images were divided into training, validation, or test sets. |
| Hardware Specification | No | The paper does not explicitly describe any specific hardware used to run the experiments (e.g., GPU/CPU models, memory). It only states that experiments were conducted. |
| Software Dependencies | No | The paper mentions using "SOFT-IMPUTE (Mazumder et al., 2010)" but does not provide a version number for it or any other software dependencies. |
| Experiment Setup | Yes | We set k = 80 in Figure 1a and track the progress of the algorithms in each round. Figure 1b compares the solution quality for different constraints k ∈ {20, 40, 60, 80, 100}... We use SOFT-IMPUTE (Mazumder et al., 2010)... and we define the similarity of two movies s_{i,j} as the inner product of the rating vectors for movies i and j. Following (Mirzasoleiman et al., 2016), we use the objective function with λ = 0.95... In Figure 1d we set k = 200, and in Figure 1e we consider k ∈ {50, 100, 150, 200, 250}... We use 10 trials for each stochastic algorithm and plot the mean and standard deviation of the solutions. (See the experiment-setup sketch after the table.) |
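
The Pseudocode row points to THRESHOLD-SAMPLING, UNCONSTRAINED-MAX, and ADAPTIVE-NONMONOTONE-MAX. The sketch below is a minimal, hypothetical Python rendering of a threshold-sampling round in the spirit of Algorithm 1, not the authors' exact procedure: the function name, the fixed threshold `tau`, and the `epsilon` block-size rule are illustrative assumptions. Its point is the adaptivity structure, where all marginal-gain queries within a round are independent of one another and could be issued in parallel.

```python
import random

def threshold_sampling(f, ground_set, k, tau, epsilon=0.1, seed=0):
    """Illustrative, simplified threshold-sampling round structure (assumed,
    not the paper's exact Algorithm 1)."""
    rng = random.Random(seed)
    S = []
    candidates = list(ground_set)
    while candidates and len(S) < k:
        base = f(S)
        # One adaptive round: these marginal-gain queries do not depend on each
        # other, so they can all be evaluated in the same round.
        candidates = [x for x in candidates if f(S + [x]) - base >= tau]
        if not candidates:
            break
        # Add a random block of surviving candidates. The fixed fraction
        # epsilon is a simplification; the paper chooses the block size
        # adaptively to preserve its guarantees.
        rng.shuffle(candidates)
        block = candidates[: max(1, int(epsilon * len(candidates)))]
        S.extend(block[: k - len(S)])
        candidates = [x for x in candidates if x not in S]
    return S
```

Calling `threshold_sampling(f, range(n), k, tau)` with a submodular set function `f` defined on Python lists gives one low-adaptivity pass; per the algorithm names above, the paper's ADAPTIVE-NONMONOTONE-MAX combines threshold-sampling passes with an unconstrained-maximization subroutine.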
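
For the Experiment Setup row, the snippet below sketches how the movie-recommendation objective could be assembled, assuming a completed ratings matrix (e.g., produced by SOFT-IMPUTE) whose columns are movie rating vectors, so that s_{i,j} is the inner product of columns i and j as described above. The graph-cut-style objective form, the matrix sizes, and the uniformly random baseline are assumptions for illustration, not the paper's implementation; only the similarity definition, λ = 0.95, k = 200, and the 10-trial mean/std reporting come from the setup quoted in the table.

```python
import numpy as np

# Hypothetical data: a (num_users x num_movies) ratings matrix that has already
# been completed by a low-rank method such as SOFT-IMPUTE. Column j is the
# rating vector of movie j, so similarity[i, j] = s_{i,j} is an inner product.
rng = np.random.default_rng(0)
num_users, num_movies = 1000, 500            # assumed sizes for illustration
ratings = rng.random((num_users, num_movies))
similarity = ratings.T @ ratings             # similarity[i, j] = s_{i,j}

def objective(subset, lam=0.95):
    """Assumed graph-cut-style non-monotone submodular objective: similarities
    from all movies to the subset, minus lam times the similarities within the
    subset (lambda = 0.95 as in the quoted setup)."""
    subset = list(subset)
    coverage = similarity[:, subset].sum()
    redundancy = similarity[np.ix_(subset, subset)].sum()
    return coverage - lam * redundancy

# Mean and standard deviation over 10 trials of a stochastic baseline
# (uniformly random size-k subsets), mirroring the 10-trial reporting above.
k = 200
values = [objective(rng.choice(num_movies, size=k, replace=False))
          for _ in range(10)]
print(f"mean = {np.mean(values):.1f}, std = {np.std(values):.1f}")
```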