Submodular Maximization Through Barrier Functions

Authors: Ashwinkumar Badanidiyuru, Amin Karbasi, Ehsan Kazemi, Jan Vondrak

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We extensively evaluate the performance of our proposed algorithm over several real-world applications, including a movie recommendation system, summarization tasks for YouTube videos, Twitter feeds and Yelp business locations, and a set cover problem.
Researcher Affiliation | Collaboration | Ashwinkumar Badanidiyuru, Google (ashwinkumarbv@gmail.com); Amin Karbasi, Yale University (amin.karbasi@yale.edu); Ehsan Kazemi, Google (ehsankazemi@google.com); Jan Vondrak, Stanford University (jvondrak@stanford.edu)
Pseudocode | Yes | Algorithm 1: BARRIER-GREEDY
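For context on the problem class the paper's Algorithm 1 targets, the following is a minimal sketch of the classic greedy for monotone submodular maximization under a cardinality constraint (the set cover objective is one of the paper's applications). This is an illustration only, not the paper's BARRIER-GREEDY; the objective and data are toy stand-ins.

```python
# Classic greedy for monotone submodular maximization with a cardinality
# constraint. NOT the paper's BARRIER-GREEDY -- just a sketch of the setting.

def coverage(selected, sets):
    """Set-cover objective: number of universe elements covered.
    Monotone and submodular in the selected index set."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy(sets, k):
    """Pick up to k sets, each round adding the one with the largest
    marginal gain in coverage."""
    selected = []
    for _ in range(k):
        base = coverage(selected, sets)
        best, best_gain = None, 0
        for i in range(len(sets)):
            if i in selected:
                continue
            gain = coverage(selected + [i], sets) - base
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # no remaining set adds coverage
            break
        selected.append(best)
    return selected

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
print(greedy(sets, 2))  # → [2, 0]: the largest set first, then the best complement
```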
Open Source Code | No | The paper mentions a script for feature extraction for a specific application: 'Script is provided at https://github.com/vc1492a/Yelp-Challenge-Dataset.' (Section 5.2, footnote 5), but does not provide the source code for the core algorithms (BARRIER-GREEDY, BARRIER-GREEDY++) described in the paper.
Open Datasets | Yes | As our first application, we aim to summarize a collection of videos from VSUMM dataset [8]. In these experiments, we use the Yelp Academic dataset [47] which is a subset of Yelp's reviews, business descriptions and user data [48].
Dataset Splits | No | The paper describes experimental parameters and constraints for submodular maximization tasks but does not specify traditional training, validation, or test dataset splits, as the experiments involve selecting subsets from a given dataset rather than supervised model training.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as CPU/GPU models, memory, or cloud instance specifications.
Software Dependencies | No | The paper mentions using a 'pre-trained ResNet-18 model' and references a script for Yelp feature extraction, but it does not specify software dependencies like programming languages, libraries, or solvers with their respective version numbers used in their experiments.
Experiment Setup | Yes | We set ε to 0.1 in all experiments. In Figs. 1a and 1c, we set the maximum number of allowed frames from each video to mi = 10 and compare algorithms for varying values of knapsack budget. We also set λ = 1.0. In Figs. 2a and 2c, we evaluate the performance of algorithms for a varying knapsack budget. We set maximum cardinality of a feasible set to m = 30, the maximum number of allowed locations from each city to mi = 10 and λ to 1.0.
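The setup above combines three constraints: a knapsack budget, a global cardinality cap m, and a per-group cap mi (e.g. at most mi frames per video or locations per city). A hypothetical sketch of that constraint structure, using a simple cost-benefit greedy rather than the paper's BARRIER-GREEDY, with toy items and parameter names chosen here for illustration:

```python
# Sketch of the experiments' constraint structure: knapsack budget, global
# cardinality cap m, and per-group cap m_i. The selection rule is a plain
# cost-benefit greedy, NOT the paper's BARRIER-GREEDY; items are toy data.

def cost_benefit_greedy(items, budget, m=30, m_i=10):
    """items: list of (value, cost, group) tuples.
    Greedily add items in decreasing value/cost order while the knapsack
    budget, the cardinality cap m, and the per-group cap m_i all hold."""
    selected = []
    spent = 0.0
    per_group = {}
    order = sorted(range(len(items)),
                   key=lambda j: items[j][0] / items[j][1], reverse=True)
    for j in order:
        value, cost, group = items[j]
        if len(selected) >= m:
            break  # global cardinality cap reached
        if spent + cost > budget or per_group.get(group, 0) >= m_i:
            continue  # would violate the budget or the per-group cap
        selected.append(j)
        spent += cost
        per_group[group] = per_group.get(group, 0) + 1
    return selected, spent

chosen, spent = cost_benefit_greedy(
    [(5.0, 1.0, "v1"), (4.0, 2.0, "v1"), (3.0, 1.0, "v2"), (1.0, 3.0, "v2")],
    budget=4.0, m_i=1)
# with m_i = 1, only the best item from each "video" group survives
print(chosen, spent)  # → [0, 2] 2.0
```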