Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
The Power of Randomization: Distributed Submodular Maximization on Massive Datasets
Authors: Rafael Barbosa, Alina Ene, Huy Nguyen, Justin Ward
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we demonstrate its efficiency in large problems with different kinds of constraints with objective values always close to what is achievable in the centralized setting. |
| Researcher Affiliation | Academia | Rafael Barbosa (Department of Computer Science and DIMAP, University of Warwick); Alina Ene (Department of Computer Science and DIMAP, University of Warwick); Huy Le Nguyen (Simons Institute, University of California, Berkeley); Justin Ward (Department of Computer Science and DIMAP, University of Warwick) |
| Pseudocode | Yes | Algorithm 1 The standard greedy algorithm GREEDY; Algorithm 2 The distributed algorithm RANDGREEDI |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology described is openly available. |
| Open Datasets | Yes | In our experiments, we used a subset of the Tiny Images dataset consisting of 32×32 RGB images (Torralba et al., 2008)... We evaluated and compared the algorithms on the datasets used in (Kumar et al., 2013). |
| Dataset Splits | No | The paper does not specify exact percentages, sample counts, or refer to predefined splits for training, validation, or test sets. It mentions using datasets for experiments without detailing data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory amounts, or cloud instance specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or solvers used in the experiments. |
| Experiment Setup | No | The paper does not provide specific hyperparameter values, training configurations, or detailed system-level settings for the experiments. It focuses on algorithmic evaluation rather than model training specifics. |
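For context on the two algorithms named in the Pseudocode row, the sketch below illustrates the standard greedy algorithm and the two-round distributed scheme (randomly partition the ground set, run greedy on each part, then run greedy once more over the union of the partial solutions and keep the best candidate). This is an illustrative reconstruction, not the authors' code; the function names (`greedy`, `rand_greedi`) and the toy coverage objective are our own.

```python
import random

def greedy(ground_set, f, k):
    """Classic greedy: repeatedly add the element with the largest
    marginal gain f(S + {e}) - f(S) until k elements are chosen."""
    S = []
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in ground_set:
            if e in S:
                continue
            gain = f(S + [e]) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # fewer than k elements available
            break
        S.append(best)
    return S

def rand_greedi(ground_set, f, k, m, seed=0):
    """Two-round distributed sketch: randomly partition the ground set
    across m machines, run greedy on each part, then run greedy on the
    union of the partial solutions; return the best candidate."""
    rng = random.Random(seed)
    parts = [[] for _ in range(m)]
    for e in ground_set:
        parts[rng.randrange(m)].append(e)   # random partition
    partials = [greedy(p, f, k) for p in parts]
    merged = greedy([e for p in partials for e in p], f, k)
    return max(partials + [merged], key=f)

# Toy monotone submodular objective: coverage (size of the union).
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0

print(greedy(list(sets), f, 2))        # prints ['a', 'c']
print(rand_greedi(list(sets), f, 2, 2))
```

On this toy instance both routines recover the optimal 2-element cover; the paper's analysis shows that the random partition is what lets the distributed variant match the centralized greedy's guarantees in expectation.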