The Power of Randomization: Distributed Submodular Maximization on Massive Datasets
Authors: Rafael Barbosa, Alina Ene, Huy Nguyen, Justin Ward
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we demonstrate its efficiency in large problems with different kinds of constraints with objective values always close to what is achievable in the centralized setting. |
| Researcher Affiliation | Academia | Rafael Barbosa (RAFAEL@DCS.WARWICK.AC.UK), Department of Computer Science and DIMAP, University of Warwick; Alina Ene (A.ENE@DCS.WARWICK.AC.UK), Department of Computer Science and DIMAP, University of Warwick; Huy Le Nguyen (HLNGUYEN@CS.PRINCETON.EDU), Simons Institute, University of California, Berkeley; Justin Ward (J.D.WARD@DCS.WARWICK.AC.UK), Department of Computer Science and DIMAP, University of Warwick |
| Pseudocode | Yes | Algorithm 1 The standard greedy algorithm GREEDY; Algorithm 2 The distributed algorithm RANDGREEDI |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology described is openly available. |
| Open Datasets | Yes | In our experiments, we used a subset of the Tiny Images dataset consisting of 32×32 RGB images (Torralba et al., 2008)... We evaluated and compared the algorithms on the datasets used in (Kumar et al., 2013). |
| Dataset Splits | No | The paper does not specify exact percentages, sample counts, or refer to predefined splits for training, validation, or test sets. It mentions using datasets for experiments without detailing data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory amounts, or cloud instance specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or solvers used in the experiments. |
| Experiment Setup | No | The paper does not provide specific hyperparameter values, training configurations, or detailed system-level settings for the experiments. It focuses on algorithmic evaluation rather than model training specifics. |
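The paper's pseudocode (Algorithm 1, GREEDY; Algorithm 2, RANDGREEDI) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a cardinality constraint of size `k`, an oracle `f` mapping a list of elements to a real value, and a hypothetical helper `rand_greedi` that follows the two-round scheme described in the paper (randomly partition the data across `m` machines, run greedy on each part, then run greedy on the pooled partial solutions).

```python
import random

def greedy(elements, f, k):
    # Standard greedy for cardinality-constrained submodular maximization:
    # repeatedly add the element with the largest marginal gain f(S+e) - f(S).
    selected = []
    candidates = list(elements)
    for _ in range(min(k, len(candidates))):
        base = f(selected)
        best = max(candidates, key=lambda e: f(selected + [e]) - base)
        selected.append(best)
        candidates.remove(best)
    return selected

def rand_greedi(elements, f, k, m, seed=0):
    # Two-round distributed scheme in the spirit of RANDGREEDI:
    # 1) randomly partition the ground set across m machines,
    # 2) run greedy independently on each part (parallelizable),
    # 3) run greedy on the union of the m partial solutions.
    rng = random.Random(seed)
    parts = [[] for _ in range(m)]
    for e in elements:
        parts[rng.randrange(m)].append(e)
    pooled = [e for part in parts for e in greedy(part, f, k)]
    return greedy(pooled, f, k)
```

As a usage example, with a coverage objective `f(S) = |union of the sets indexed by S|` (monotone submodular), `rand_greedi(list(sets), f, k, m)` returns a size-`k` solution whose value is typically close to that of running `greedy` on the full ground set, matching the paper's empirical observation that objective values stay close to the centralized setting.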