Approximation Algorithms for Cascading Prediction Models

Authors: Matthew Streeter

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4. Experiments. In this section we evaluate our cascade generation algorithm by applying it to state-of-the-art pre-trained models for the ImageNet classification task."
Researcher Affiliation | Industry | "Google Research. Correspondence to: Matthew Streeter <mstreeter@google.com>."
Pseudocode | Yes | "Algorithm 1 ConfidentModel(x; p, q̂, t)"
Open Source Code | No | The paper does not provide any repository links or explicit statements about the release of source code for the described methodology.
Open Datasets | Yes | "We used 25,000 images from the ILSVRC 2012 validation set (Russakovsky et al., 2015) to run the algorithm, and report results on the remaining 25,000 validation images."
Dataset Splits | Yes | "We used 25,000 images from the ILSVRC 2012 validation set (Russakovsky et al., 2015) to run the algorithm, and report results on the remaining 25,000 validation images."
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, processor types) used to run its experiments.
Software Dependencies | No | The paper mentions the "TF-Slim library (Silberman & Guadarrama, 2016)" but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | "For our ImageNet experiments, we take top-1 accuracy as the accuracy metric, and predict its value based on a vector of features derived from the model's predicted class probabilities. We use as features (1) the entropy of the vector, (2) the maximum predicted class probability, and (3) the gap between the first and second highest predictions in logit space. Our accuracy model q̂ is fit using logistic regression on a validation set of 25,000 images."
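The pseudocode row above names Algorithm 1, ConfidentModel(x; p, q̂, t). A minimal sketch of the idea that name suggests, assuming p is a classifier returning class probabilities, q̂ predicts p's accuracy from those probabilities, and a prediction is emitted only when q̂ clears the threshold t; the function names and the use of None to signal deferral are illustrative assumptions, not taken from the paper:

```python
def confident_model(x, p, q_hat, t):
    """Sketch of a confidence-gated model in a cascade: return p's
    top-1 prediction on x only when the predicted accuracy q_hat
    meets threshold t; otherwise defer (return None) so a later,
    larger model in the cascade can handle x."""
    probs = p(x)                    # p's predicted class probabilities
    if q_hat(probs) >= t:           # q_hat estimates accuracy from the probabilities
        # top-1 class: index of the largest predicted probability
        return max(range(len(probs)), key=probs.__getitem__)
    return None                     # defer to the next model in the cascade
```

A caller would chain several such gated models, falling through to the next (typically more expensive) model whenever the current one defers.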
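The experiment-setup row describes three features computed from a model's predicted class probabilities: the entropy of the probability vector, the maximum predicted probability, and the gap between the two largest logits. A small sketch of that feature extraction; the function name is an assumption, and the logistic-regression fit of q̂ on the 25,000-image validation set is omitted:

```python
import math

def confidence_features(probs, logits):
    """Compute the three confidence features described in the paper:
    (1) entropy of the predicted class distribution,
    (2) the maximum predicted class probability,
    (3) the gap between the first and second highest logits.
    These features would feed a logistic-regression accuracy model."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    max_prob = max(probs)
    top_two = sorted(logits, reverse=True)[:2]
    logit_gap = top_two[0] - top_two[1]
    return [entropy, max_prob, logit_gap]
```

Low entropy, a high maximum probability, and a large logit gap all indicate a confident prediction, which is why they are plausible inputs for predicting top-1 accuracy.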