Distribution-Free Calibration Guarantees for Histogram Binning without Sample Splitting

Authors: Chirag Gupta, Aaditya Ramdas

ICML 2021

Reproducibility Variable Result LLM Response
Research Type Experimental We prove calibration guarantees for the popular histogram binning (also called uniform-mass binning) method of Zadrozny and Elkan (2001). Histogram binning has displayed strong practical performance, but theoretical guarantees have only been shown for sample-split versions that avoid double dipping the data. We demonstrate that the statistical cost of sample splitting is practically significant on a credit default dataset. We then prove calibration guarantees for the original method that double dips the data, using a certain Markov property of order statistics. Based on our results, we make practical recommendations for choosing the number of bins in histogram binning. In our illustrative simulations, we propose a new tool for assessing calibration: validity plots, which provide more information than an ECE estimate.
Researcher Affiliation Academia 1Carnegie Mellon University. Correspondence to: Chirag Gupta <chiragg@cmu.edu>.
Pseudocode Yes Algorithm 1 UMD: Uniform-mass binning without sample splitting
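The paper's Algorithm 1 (UMD) is not reproduced in this report. As a rough illustration of the underlying idea only, the following is a minimal sketch of uniform-mass histogram binning in which bin edges and bin means are fit on the same calibration data (no sample splitting). Function names are hypothetical and this is not the authors' exact algorithm.

```python
import numpy as np

def uniform_mass_bins(scores, n_bins):
    """Bin edges chosen so each bin holds roughly the same number of points."""
    return np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))

def fit_histogram_binning(scores, labels, n_bins):
    """Fit bin edges AND bin means on the same data (the double-dipping variant)."""
    edges = uniform_mass_bins(scores, n_bins)
    # Interior edges only: np.digitize then yields indices 0..n_bins-1.
    idx = np.clip(np.digitize(scores, edges[1:-1]), 0, n_bins - 1)
    # The mean label within a bin becomes that bin's recalibrated probability.
    means = np.array([labels[idx == b].mean() if np.any(idx == b) else 0.5
                      for b in range(n_bins)])
    return edges, means

def apply_histogram_binning(scores, edges, means):
    """Map new scores to the recalibrated probability of their bin."""
    idx = np.clip(np.digitize(scores, edges[1:-1]), 0, len(means) - 1)
    return means[idx]
```

A sample-split version (as in prior analyses) would instead compute `edges` on one half of the calibration data and `means` on the other half.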
Open Source Code Yes Relevant code can be found at https://github.com/aigen/df-posthoc-calibration
Open Datasets Yes Figure 1b uses validity plots to assess UMS and UMD on CREDIT, a UCI credit default dataset (Yeh and Lien, 2009; https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients).
Dataset Splits Yes The experimental protocol is as follows. The entire feature matrix is first normalized. CREDIT has 30,000 samples which are randomly split (once for the entire experiment) into splits (A, B, C) = (10K, 5K, 15K). First, g is formed by training a logistic regression model on split A and then re-scaling the learnt model using Platt scaling on split B (Platt scaling before binning was suggested by Kumar et al. (2019); we also observed that this helps in practice).
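The split protocol described in this row can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the CREDIT loading step is replaced by placeholder random data, and all variable names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import scale

rng = np.random.default_rng(0)
n = 30_000
X = rng.normal(size=(n, 23))             # placeholder for the CREDIT feature matrix
y = (rng.random(n) < 0.22).astype(int)   # placeholder default labels

X = scale(X)                             # normalize the entire feature matrix first
perm = rng.permutation(n)                # one random split for the whole experiment
A, B, C = perm[:10_000], perm[10_000:15_000], perm[15_000:]

base = LogisticRegression(max_iter=1000).fit(X[A], y[A])   # train on split A
# Platt scaling on split B: a 1-D logistic regression on the base model's logits.
logits_B = base.decision_function(X[B]).reshape(-1, 1)
platt = LogisticRegression().fit(logits_B, y[B])

def g(X_new):
    """Platt-scaled base-model scores, ready for histogram binning."""
    return platt.predict_proba(base.decision_function(X_new).reshape(-1, 1))[:, 1]

scores_C = g(X[C])  # split C is then used for binning calibration and evaluation
```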
Hardware Specification No The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running the experiments.
Software Dependencies No The paper mentions "Python's sklearn.preprocessing.scale" but does not specify version numbers for Python or scikit-learn, or any other software components.
Experiment Setup Yes For a given subsample, UMS/UMD with B = 10 is trained on the calibration set (with 50:50 sample splitting for UMS), and V̂(ε) for every ε is estimated on the test set. Finally, the mean ± std-dev-of-mean of V̂(ε) is plotted with respect to ε.
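The paper's validity plots track V̂(ε), an estimate of the probability that the binned predictor's calibration error is at most ε. The estimator below is a plausible sketch of one such quantity, not the paper's exact definition: the test-mass weighting over distinct recalibrated outputs is an assumption.

```python
import numpy as np

def validity_curve(pred_probs, test_labels, eps_grid):
    """For each distinct recalibrated output p, compare p to the empirical label
    frequency among test points receiving that prediction; V-hat(eps) is then
    the test-mass-weighted fraction of predictions deviating by at most eps."""
    values, inverse = np.unique(pred_probs, return_inverse=True)
    freq = np.array([test_labels[inverse == k].mean() for k in range(len(values))])
    dev = np.abs(values - freq)                      # per-output calibration error
    weights = np.bincount(inverse) / len(pred_probs)  # test mass per output value
    return np.array([(weights * (dev <= e)).sum() for e in eps_grid])
```

Plotting this curve against ε, with ± standard error over repeated subsamples, would reproduce the general shape of a validity plot; a higher curve at small ε indicates better calibration.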