The FAST Algorithm for Submodular Maximization

Authors: Adam Breuer, Eric Balkanski, Yaron Singer

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show that FAST is orders of magnitude faster than any algorithm for submodular maximization we are aware of, including hyper-optimized parallel versions of state-of-the-art serial algorithms, by running experiments on large data sets.
Researcher Affiliation | Academia | Harvard University, Cambridge, MA. Correspondence to: Adam Breuer <breuer@g.harvard.edu>, Eric Balkanski <ebalkans@gmail.com>, Yaron Singer <yaron@seas.harvard.edu>.
Pseudocode | Yes | Algorithm 1 FAST-FULL: the full algorithm
Open Source Code | Yes | Code is available from www.adambreuer.com/code.
Open Datasets | Yes | To compare the algorithms' runtimes on real data, we optimize Sensor Placement on California roadway traffic data; Movie Recommendation on MovieLens data; Revenue Maximization on YouTube Network data; and Influence Maximization on Facebook Network data. See Appendix C.4.3 for additional details.
Dataset Splits | No | No specific details on dataset splits (e.g., train/validation/test percentages or counts) are provided in the main text.
Hardware Specification | Yes | We then use 95 Intel Skylake-SP 3.1 GHz processors on AWS to compare the algorithms' runtimes over a variety of objectives defined on 8 real and synthetic datasets.
Software Dependencies | No | The paper mentions 'optimized parallel MPI implementations' but does not provide specific version numbers for MPI or any other software dependencies.
Experiment Setup | Yes | To obtain these values, we set all algorithms to guarantee a 1 - 1/e - 0.1 approximation with probability 0.95 (except LTLG, which has this guarantee in expectation), and we set k = 200.
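
For reference, the setting above targets a solution value of at least 1 - 1/e - 0.1 (about 0.532) times optimal. The sketch below is a minimal illustration of the classic greedy baseline for monotone submodular maximization under a cardinality constraint k, the problem setting FAST and baselines such as LTLG address; it is not the FAST algorithm itself, and the function names and toy coverage objective are assumptions for illustration.

import math

def greedy_max(ground_set, f, k):
    """Classic greedy for monotone submodular maximization under a
    cardinality constraint k; achieves a (1 - 1/e) approximation.
    FAST targets 1 - 1/e - eps (eps = 0.1 above) while using far
    fewer adaptive rounds."""
    S = set()
    for _ in range(k):
        # Select the element with the largest marginal gain f(S + e) - f(S).
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Toy coverage objective (hypothetical): each element covers some items,
# and f(S) counts the distinct items covered by S.
coverage = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: len(set().union(*(coverage[e] for e in S)))

print(greedy_max(set(coverage), f, k=2))                  # -> {1, 3}
print(f"approximation target: {1 - 1/math.e - 0.1:.3f}")  # -> 0.532

Lazy greedy and its parallel variants accelerate this same sequential loop by caching marginal gains; FAST instead reduces the number of adaptive rounds, which is what drives the parallel-runtime comparisons reported above.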