Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

A Tight Bound for Stochastic Submodular Cover

Authors: Lisa Hellerstein, Devorah Kletenik, Srinivasan Parthasarathy

JAIR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We show that the Adaptive Greedy algorithm of Golovin and Krause achieves an approximation bound of (ln(Q/η)+1) for Stochastic Submodular Cover..."; "In this paper, we prove that the Adaptive Greedy algorithm yields an α(ln(Q/η)+1) approximation bound for Stochastic Submodular Cover; the bound is αH(Q), where H(Q) is the Qth Harmonic number, when the utility function is integer valued."
Researcher Affiliation | Collaboration | Lisa Hellerstein EMAIL, Dept. of Computer Science and Engineering, NYU Tandon School of Engineering, Brooklyn, NY 11201, USA; Devorah Kletenik EMAIL, Dept. of Computer and Information Science, Brooklyn College, City University of New York, Brooklyn, NY 11210, USA; Srinivasan Parthasarathy EMAIL, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, USA
Pseudocode | No | The paper describes the Adaptive Greedy algorithm verbally, stating that "It repeatedly selects the item which would yield the largest expected increase in utility, per unit cost." However, it does not present this algorithm, or any other procedure, in a structured pseudocode block or a clearly labeled algorithm section.
Open Source Code | No | The paper does not contain any explicit statement about the release of source code, nor does it link to a code repository. The paper focuses on theoretical analysis and proofs.
Open Datasets | No | The paper is theoretical and does not conduct experiments on datasets. Therefore, it does not mention or provide access information for any open datasets.
Dataset Splits | No | The paper is theoretical and does not conduct experiments with datasets. Consequently, there is no mention of dataset splits such as training, validation, or test sets.
Hardware Specification | No | The paper presents theoretical work and does not describe any experimental setup involving specific hardware. Therefore, no hardware specifications are mentioned.
Software Dependencies | No | The paper focuses on theoretical analysis and proofs and does not describe an experimental implementation. Therefore, no software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical, providing proofs and analysis of an algorithm's approximation bound. It does not describe any empirical experiments, so no experimental setup details such as hyperparameters or training configurations are given.
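The greedy rule quoted in the Pseudocode row ("repeatedly selects the item which would yield the largest expected increase in utility, per unit cost") can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's implementation: the stochastic aspect (item states revealed after selection) is omitted, and the set-cover utility, costs, and function names below are hypothetical examples.

```python
def adaptive_greedy_cover(items, cost, gain, goal):
    """Greedy rule: repeatedly pick the item maximizing the
    (expected) utility increase per unit cost until the utility
    goal Q is reached. `gain(item, chosen)` is a hypothetical
    callback returning the marginal utility of `item` given the
    items already chosen."""
    chosen, utility = [], 0
    remaining = set(items)
    while utility < goal and remaining:
        best = max(remaining, key=lambda i: gain(i, chosen) / cost[i])
        inc = gain(best, chosen)
        if inc <= 0:  # no item adds utility; goal is unreachable
            break
        chosen.append(best)
        utility += inc
        remaining.remove(best)
    return chosen, utility

# Toy instance with a coverage utility (submodular): utility is
# the number of ground elements covered by the chosen sets.
sets_ = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
cost = {"a": 2.0, "b": 1.0, "c": 2.0}

def gain(item, chosen):
    covered = set().union(*(sets_[c] for c in chosen)) if chosen else set()
    return len(sets_[item] - covered)

picked, util = adaptive_greedy_cover(sets_, cost, gain, goal=6)
# "b" is picked first (best gain-per-cost ratio: 2 elements / cost 1.0),
# and all three sets are needed to reach utility 6.
```

Note that for a worst-case stochastic instance the selection would maximize an expectation over item states rather than a deterministic marginal gain; the loop structure is otherwise the same.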