Smooth Interactive Submodular Set Cover
Authors: Bryan D. He, Yisong Yue
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compared our multiple threshold method against multiple baselines (see Appendix D for more details) in a range of simulation settings (see Appendix E.1). Figure 4 shows the results. We see that our approach is consistently amongst the best performing methods. The primary competitor is the circuit of constraints approach from [11] (see Appendix D.3 for a comparison of the theoretical guarantees). We also note that all approaches dramatically outperform their worst-case guarantees. |
| Researcher Affiliation | Academia | Bryan He, Stanford University (bryanhe@stanford.edu); Yisong Yue, California Institute of Technology (yyue@caltech.edu) |
| Pseudocode | Yes | Algorithm 1 Worst Case Greedy Algorithm for Smooth Interactive Submodular Set Cover |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for its methodology is open-source or publicly available. |
| Open Datasets | No | The paper describes simulation experiments where data is generated for the purpose of the simulation: 'We generate a random user-item matrix of size M = 100 × 100 with ratings uniformly drawn from {0, 1, ..., 5}. We generate N = 100 hypotheses...'. It does not utilize or provide concrete access information for a publicly available or open dataset. |
| Dataset Splits | No | The paper describes simulation experiments but does not specify training, validation, or test dataset splits or a cross-validation setup for reproduction. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the simulation experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as programming languages or library versions, used in the experiments. |
| Experiment Setup | Yes | Appendix E.1 'Simulation Settings' describes how the simulation data was generated and configured: 'We generate a random user-item matrix of size M = 100 × 100 with ratings uniformly drawn from {0, 1, ..., 5}. We generate N = 100 hypotheses, where each hypothesis h has a unique threshold function αh(·) (chosen as described in Section E.1.1) and a unique utility function Fh(·) (chosen as described in Section E.1.2).' |
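Although no code is released, the simulation setup quoted above is concrete enough to sketch. The following is a minimal, hypothetical reconstruction of the data generation only: a 100 × 100 user-item matrix with ratings drawn uniformly from {0, ..., 5}, and N = 100 hypotheses, each with its own utility function. The paper's specific threshold functions αh(·) and utility functions Fh(·) (Appendices E.1.1 and E.1.2) are not reproduced here; as an assumed stand-in we use weighted max-coverage utilities, which are monotone submodular but are our choice, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

# User-item rating matrix: M = 100 x 100, ratings uniform over {0, 1, ..., 5}
# (matches the setup quoted from Appendix E.1).
ratings = rng.integers(0, 6, size=(100, 100))

# N = 100 hypotheses. The paper gives each hypothesis h a unique threshold
# alpha_h and utility F_h; those exact constructions are in Appendices
# E.1.1-E.1.2 and are NOT reconstructed here. As a placeholder we use a
# weighted max-coverage utility, F_h(S) = sum_j w_j * max_{i in S} ratings[i, j],
# which is monotone and submodular in the selected user set S.
N = 100

def make_utility(weights):
    def F(S):
        if not S:
            return 0.0
        return float(np.sum(weights * ratings[list(S), :].max(axis=0)))
    return F

hypotheses = [make_utility(rng.random(100)) for _ in range(N)]
```

This only mirrors the scale of the experiments (M = 100 × 100, N = 100); reproducing the reported results would additionally require the baselines of Appendix D and the threshold/utility constructions of Appendix E.1.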