Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint

Authors: Georgios Amanatidis, Federico Fusco, Philip Lazos, Stefano Leonardi, Rebecca Reiffenhäuser

NeurIPS 2020

Reproducibility assessment: each variable, its result, and the supporting LLM response.
Research Type: Experimental. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data. ... In Section 5 we showcase the fact that our theoretical results indeed translate into applied performance. We focus on two applications that fit within the framework of non-monotone submodular maximization subject to a knapsack constraint, namely video recommendation and influence-and-exploit marketing. We run experiments on real and synthetic data that indicate that SAMPLEGREEDY consistently performs better than FANTOM while being much faster.
Researcher Affiliation: Academia. Georgios Amanatidis, University of Essex & University of Amsterdam (georgios.amanatidis@essex.ac.uk); Federico Fusco, Sapienza University of Rome (fuscof@diag.uniroma1.it); Philip Lazos, Sapienza University of Rome (lazos@diag.uniroma1.it); Stefano Leonardi, Sapienza University of Rome (leonardi@diag.uniroma1.it); Rebecca Reiffenhäuser, Sapienza University of Rome (rebeccar@diag.uniroma1.it)
Pseudocode: Yes. The pseudo-code for the resulting algorithm, ADAPTIVEGREEDY, is given below.
Open Source Code: No. The paper does not provide any explicit links or statements about the availability of its source code for the described methods.
Open Datasets: Yes. We evaluate SAMPLEGREEDY on an instance based on the latest version of the MovieLens dataset [31]... [31] F. M. Harper and J. A. Konstan. The MovieLens datasets: History and context. ACM Trans. Interact. Intell. Syst., 5(4), Dec. 2015. ... We evaluate ADAPTIVEGREEDY on an instance based on the YouTube graph [50]... [50] J. Yang and J. Leskovec. Defining and evaluating network communities based on ground-truth. Knowledge and Information Systems, 42(1):181–213, 2015.
Dataset Splits: No. The paper mentions using the MovieLens and YouTube graph datasets but does not specify any training, validation, or test splits (e.g., percentages, absolute counts, or cross-validation setup).
Hardware Specification: No. The paper describes the performance and speed of algorithms but does not specify any hardware details such as GPU/CPU models, memory, or computing infrastructure used for the experiments.
Software Dependencies: No. The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) used for the experiments.
Experiment Setup: Yes. A delicate point is tuning the probability of acceptance p for improved performance. ... we guess a good value for it by running our algorithms 5 times, each with a p chosen uniformly at random from [0.9, 1]. ... We use λ = 3, µ = 7 and set the parameters α, β so that the two terms are of comparable size. ... We assume that the valuation function of each buyer i is of the form f_i(x) = a_i · x, where a_i is drawn independently from a Pareto Type II distribution with λ = 1, α = 2.
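To make the SAMPLEGREEDY references above concrete, the following is a minimal illustrative sketch of the standard sample-greedy template for a knapsack constraint: repeatedly take the feasible element with the best marginal-value-per-cost ratio, and keep it with probability p (the random rejection is what handles non-monotonicity). This is an assumed reconstruction for illustration, not the paper's exact pseudocode; the toy coverage function and all names here are hypothetical.

```python
import random

def sample_greedy(elements, cost, budget, f, p=0.9, seed=0):
    """Illustrative sample-greedy sketch (assumed template, not the
    paper's exact algorithm): greedily pick the feasible element with
    the best marginal-density, keeping each pick with probability p."""
    rng = random.Random(seed)
    S, spent = set(), 0.0
    remaining = set(elements)
    while remaining:
        best, best_ratio = None, 0.0
        for e in remaining:
            if spent + cost[e] > budget:
                continue  # would overflow the knapsack
            gain = f(S | {e}) - f(S)
            ratio = gain / cost[e]
            if best is None or ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None or best_ratio <= 0:
            break  # no feasible element with positive marginal gain
        remaining.discard(best)
        if rng.random() < p:  # sampling step: keep the pick w.p. p
            S.add(best)
            spent += cost[best]
    return S

# Toy coverage instance: f(S) = number of distinct items covered.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}, 4: {"a", "b", "c", "d"}}
cost = {1: 1.0, 2: 1.0, 3: 1.0, 4: 3.5}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
S = sample_greedy(cover.keys(), cost, budget=3.0, f=f, p=1.0)
```

With p = 1.0 the sketch reduces to the plain density greedy: element 4 never fits the budget, so the three unit-cost elements are selected.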
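The sampling steps quoted in the experiment setup can be sketched in a few lines: drawing 5 candidate values of p uniformly from [0.9, 1], and drawing buyer valuation slopes a_i from a Pareto Type II (Lomax) distribution with λ = 1, α = 2. This is a sketch under the assumption that NumPy's Lomax sampler matches the paper's parameterization; the sample size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Guess a good acceptance probability p by trying 5 values drawn
# uniformly at random from [0.9, 1], as described in the setup.
candidate_ps = rng.uniform(0.9, 1.0, size=5)

# Buyer valuation slopes a_i ~ Pareto Type II (Lomax), lambda=1, alpha=2.
# NumPy's Generator.pareto samples the Lomax distribution with scale 1,
# so the scale parameter lambda multiplies the draws.
lam, alpha = 1.0, 2.0
a = lam * rng.pareto(alpha, size=1000)
```

Each of the 5 runs would then use one of `candidate_ps`, and each buyer i a valuation f_i(x) = a_i · x.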