Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Non-monotone Submodular Maximization in Exponentially Fewer Iterations
Authors: Eric Balkanski, Adam Breuer, Yaron Singer
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Specifically, experiments on traffic monitoring and personalized data summarization applications show that the algorithm finds solutions whose values are competitive with state-of-the-art algorithms while running in exponentially fewer parallel iterations. |
| Researcher Affiliation | Academia | Eric Balkanski Harvard University EMAIL Adam Breuer Harvard University EMAIL Yaron Singer Harvard University EMAIL |
| Pseudocode | Yes | Algorithm 1 BLITS: the BLock ITeration Submodular maximization algorithm |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology or a direct link to a code repository. |
| Open Datasets | Yes | Traffic monitoring...using data from the CalTrans PeMS system [Cal]... Image summarization...10K Tiny Images dataset [KH09]... Movie recommendation...MovieLens dataset [HK15]... Revenue maximization...YouTube social network [FHK15] |
| Dataset Splits | No | The paper describes the datasets used and the cardinality constraint 'k', but it does not specify explicit training, validation, or test dataset splits (e.g., percentages or sample counts) as would be typical for machine learning model evaluation. The experiments involve selecting subsets from a ground set based on a submodular function. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU/CPU models, memory specifications) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiments. |
| Experiment Setup | Yes | for all experiments, we initialized BLITS to use only 30 samples of size k/r per round, far fewer than the theoretical requirement necessary to fulfill its approximation guarantee. |
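The setup quoted above (30 sampled blocks of size k/r per round, over r rounds) can be illustrated with a minimal sketch. This is not the paper's full BLITS algorithm, which additionally filters low-value elements; it only shows the block-sampling round structure, using a toy coverage function as the submodular objective. All names (`best_block_round`, `coverage`) are illustrative assumptions.

```python
import random

def best_block_round(f, S, candidates, block_size, num_samples=30, rng=random):
    """One illustrative round: sample `num_samples` random blocks of size
    `block_size` from the remaining candidates and keep the block with the
    largest marginal contribution f(S + block) - f(S).
    (Simplified sketch of the sampling setup; not the full BLITS algorithm.)"""
    base = f(S)
    best_gain, best_block = float("-inf"), None
    for _ in range(num_samples):
        block = rng.sample(candidates, block_size)
        gain = f(S + block) - base
        if gain > best_gain:
            best_gain, best_block = gain, block
    return best_block

# Toy submodular objective: coverage over overlapping sets.
universe_sets = {i: set(range(i, i + 3)) for i in range(20)}

def coverage(elems):
    covered = set()
    for e in elems:
        covered |= universe_sets[e]
    return len(covered)

k, r = 8, 4  # cardinality constraint and number of parallel rounds
random.seed(0)
S = []
remaining = list(universe_sets)
for _ in range(r):
    block = best_block_round(coverage, S, remaining, k // r)
    S += block
    remaining = [e for e in remaining if e not in block]
print(len(S), coverage(S))
```

Each round is embarrassingly parallel (the 30 sampled blocks can be evaluated independently), which is the source of the exponential reduction in adaptive iterations claimed in the title.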