PAC Confidence Predictions for Deep Neural Network Classifiers
Authors: Sangdon Park, Shuo Li, Insup Lee, Osbert Bastani
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we demonstrate that our approach can be used to provide guarantees for state-of-the-art DNNs. |
| Researcher Affiliation | Academia | Sangdon Park, Shuo Li, Insup Lee & Osbert Bastani PRECISE Center University of Pennsylvania {sangdonp, lishuo1, lee, obastani}@seas.upenn.edu |
| Pseudocode | No | The paper describes algorithms in text and mathematical formulations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a direct link to a code repository or explicitly state that source code for the methodology is available. |
| Open Datasets | Yes | We use the ImageNet dataset (Russakovsky et al., 2015) and ResNet101 (He et al., 2016) for evaluation. |
| Dataset Splits | Yes | We split the ImageNet validation set into 20,000 calibration and 10,000 test images. |
| Hardware Specification | No | The paper mentions measuring CPU and GPU time with the PyTorch profiler, but it does not specify the CPU or GPU models, or any other hardware, used for the experiments. |
| Software Dependencies | No | The paper mentions the "PyTorch profiler" but does not specify version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | default parameters of Ĉ are K = 20, n = 20,000, and δ = 10⁻². ... For the cascading classifier, we use the original ResNet101 as the slow network, and add a single exit branch (i.e., M = 2) at a quarter of the way from the input layer. ... The algorithm takes ... the desired relative error ξ ∈ [0, 1], a confidence level δ ∈ [0, 1]. ... Our algorithm takes the confidence predictor f̂, desired bound ξ ∈ ℝ>0 on the unsafety probability, confidence level δ ∈ [0, 1], calibration set W ⊆ X of rollouts ζ ∼ D_π̂, and calibration set Z ⊆ O of samples from distribution D. |
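The (ξ, δ) guarantee quoted above is a PAC bound calibrated from a held-out set. A minimal, stdlib-only sketch of the underlying idea, assuming a Clopper-Pearson-style binomial tail bound (the function names and the error count of 100 are illustrative, not taken from the paper) at the paper's default n = 20,000 and δ = 10⁻²:

```python
import math

def binom_cdf(k: int, n: int, p: float) -> float:
    """Exact binomial CDF: P[Binom(n, p) <= k]."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def cp_upper_bound(k: int, n: int, delta: float) -> float:
    """Upper confidence bound on the true error rate, given k observed
    errors among n calibration samples: the smallest eps such that
    P[Binom(n, eps) <= k] <= delta, found by bisection. With probability
    at least 1 - delta over the calibration draw, the true rate is below
    the returned value (a PAC-style guarantee)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):  # 60 bisection steps give ~1e-18 resolution
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) <= delta:
            hi = mid  # tail already small enough; tighten from above
        else:
            lo = mid
    return hi

# Illustrative: 100 errors among n = 20,000 calibration points, delta = 1e-2.
eps = cp_upper_bound(100, 20_000, 1e-2)
print(f"PAC upper bound on error rate: {eps:.4f}")
```

The bound is necessarily above the empirical rate k/n = 0.005, with the gap shrinking as n grows or δ is relaxed.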