Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Trading-Off Cost of Deployment Versus Accuracy in Learning Predictive Models
Authors: Daniel P. Robinson, Suchi Saria
IJCAI 2016 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments. We use MIMIC-II [Saeed et al., 2011], a large publicly available dataset of electronic health records from patients admitted to four different ICUs at the Beth Israel Deaconess Medical Center over a seven year period. We split the individuals into training (75%) and test (25%) sets. The receiver operating characteristic (ROC) curve and area under that curve (AUC) are obtained. |
| Researcher Affiliation | Academia | Daniel P. Robinson Johns Hopkins University Department of Applied Mathematics and Statistics Baltimore, Maryland EMAIL Suchi Saria Johns Hopkins University Department of Computer Science Baltimore, Maryland EMAIL |
| Pseudocode | No | The paper describes mathematical formulations and processes but does not include a clearly labeled pseudocode block or algorithm. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing code or a link to a source code repository for the methodology described. |
| Open Datasets | Yes | We use MIMIC-II [Saeed et al., 2011], a large publicly available dataset of electronic health records from patients admitted to four different ICUs at the Beth Israel Deaconess Medical Center over a seven year period. |
| Dataset Splits | No | We split the individuals into training (75%) and test (25%) sets. From the training set, we process the data using a sliding window to extract positive and negative samples consisting of the features observed at a given time, and an associated label that is positive if septic shock occurred within the following 48 hours and negative otherwise. Since the dataset is imbalanced, we subsample the negative pairs to obtain a balanced training set. |
| Hardware Specification | Yes | For example, constructing the regularizer for the ICU application took approximately 10 seconds on a MacBook Air laptop (1.8 GHz Intel Core i5 processor with 4GB of RAM). |
| Software Dependencies | No | The paper mentions using the 'mexFistaGraph routine in SPAMS' and 'SymPy' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The maximum allowed iteration limit was set to 5,000 and the termination tolerance (duality gap) to 10⁻³. For each of these scenarios, we select values for λ and λ_time from an equally spaced grid over the interval [10⁻³, 10⁷]... |
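The evaluation protocol quoted above (a 75/25 split by individual, subsampling of negatives to balance the training set, and ROC/AUC on the held-out test set) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's pipeline: the MIMIC-II dataset requires credentialed access, and the function names, the linear least-squares scorer, and all constants besides the 75/25 split are placeholders.

```python
# Hedged sketch of the protocol described in the table above:
# 75/25 split, balanced subsampling of negatives, AUC on the test set.
# All names and the synthetic data are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def balanced_subsample(X, y, rng):
    """Subsample the majority (negative) class to match the positive count."""
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    keep_neg = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, keep_neg])
    rng.shuffle(idx)
    return X[idx], y[idx]

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic, imbalanced stand-in for the extracted (features, label) pairs.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1).astype(int)

# 75/25 split of individuals, as stated in the paper.
perm = rng.permutation(len(y))
split = int(0.75 * len(y))
train, test = perm[:split], perm[split:]

# Balance the training set, fit a trivial linear scorer, evaluate AUC.
X_tr, y_tr = balanced_subsample(X[train], y[train], rng)
w = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]
print(f"test AUC: {auc(X[test] @ w, y[test]):.3f}")
```

The rank-sum formulation of AUC avoids pulling in an external dependency; with a real pipeline one would typically use `sklearn.metrics.roc_auc_score` instead.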