The Power of Adaptivity for Stochastic Submodular Cover
Authors: Rohan Ghuge, Anupam Gupta, Viswanath Nagarajan
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and real datasets show qualitative improvements in the solutions as we allow more rounds of adaptivity; in practice, solutions with a few rounds of adaptivity are nearly as good as fully adaptive solutions. |
| Researcher Affiliation | Academia | 1Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, USA. 2Department of Computer Science, Carnegie Mellon University, Pittsburgh, USA. |
| Pseudocode | Yes | Algorithm 1 PARtial Covering Algorithm PARCA(X, f, Q, δ) |
| Open Source Code | No | The paper does not provide any explicit statements about open-source code availability or links to code repositories for the described methodology. |
| Open Datasets | Yes | We use the Epinions network and a Facebook messages dataset described in (Rossi & Ahmed, 2015) to generate instances of the stochastic set cover problem. ... We use a real-world dataset called WISER for our experiments. ... This dataset has been used for evaluating algorithms for similar problems in other papers (Bhavnani et al., 2007; Bellala et al., 2011; Bellala et al., 2012; Navidi et al., 2020). |
| Dataset Splits | No | The paper describes sampling new realizations and running trials for evaluation, but it does not specify explicit training, validation, or test dataset splits. |
| Hardware Specification | Yes | We conducted all of our computational experiments using Python 3.8 and Gurobi 8.1 with a 2.3 Ghz Intel Core i5 processor and 16 GB 2133 MHz LPDDR3 memory. |
| Software Dependencies | Yes | We conducted all of our computational experiments using Python 3.8 and Gurobi 8.1 with a 2.3 Ghz Intel Core i5 processor and 16 GB 2133 MHz LPDDR3 memory. |
| Experiment Setup | Yes | We use δ = 0.5 for the Epinions network. However, since the Facebook messages network is sparse, we set δ = 0.25 in the second instance. ... We generate costs randomly for each test from {1, 4, 7, 10} according to the weight vector [0.1, 0.2, 0.4, 0.3]. |
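The cost-sampling step quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the authors' code: the cost values {1, 4, 7, 10} and weights [0.1, 0.2, 0.4, 0.3] come from the paper, while the function name, seed, and number of tests are assumptions made for the example.

```python
import random

# Values and weights quoted from the paper's experiment setup;
# everything else here (names, seed, num_tests) is illustrative.
COST_VALUES = [1, 4, 7, 10]
COST_WEIGHTS = [0.1, 0.2, 0.4, 0.3]

def sample_test_costs(num_tests, seed=None):
    """Draw one random cost per test from COST_VALUES, weighted by COST_WEIGHTS."""
    rng = random.Random(seed)
    return [rng.choices(COST_VALUES, weights=COST_WEIGHTS, k=1)[0]
            for _ in range(num_tests)]

costs = sample_test_costs(num_tests=20, seed=0)
print(costs)
```

With a weight of 0.4 on cost 7 and 0.3 on cost 10, most sampled tests are relatively expensive, which makes the cost-aware selection in the covering algorithm non-trivial.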