Individual Calibration with Randomized Forecasting

Authors: Shengjia Zhao, Tengyu Ma, Stefano Ermon

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We assess the benefit of forecasters trained with the new objective on two applications: fairness and decision making under uncertainty against adversaries. Calibration on protected groups has traditionally been used as a definition of fairness (Kleinberg et al., 2016). On a UCI crime prediction task, we show that forecasters trained for individual calibration achieve lower calibration error on protected groups without knowing these groups in advance. We perform simulations to verify that individually calibrated forecasters are less exploitable and can achieve higher utility in practice. Experiment details: we use the UCI crime and communities dataset (Dua & Graff, 2017) and predict the crime rate from features of the neighborhood (such as racial composition); for training details and network architecture see Appendix B.3. The results are shown in Figures 3 and 4. (A hedged sketch of a group-calibration check appears after this table.)
Researcher Affiliation | Academia | Computer Science Department, Stanford University. Correspondence to: Shengjia Zhao <sjzhao@stanford.edu>, Tengyu Ma <tengyuma@stanford.edu>, Stefano Ermon <ermon@stanford.edu>.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/ShengjiaZhao/Individual-Calibration
Open Datasets | Yes | We use the UCI crime and communities dataset (Dua & Graff, 2017). URL: http://archive.ics.uci.edu/ml.
Dataset Splits | No | The paper mentions 'validation samples' and 'random training/validation partitions' but does not provide specific split percentages or sample counts for the validation set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | We model uncertainty with Gaussian distributions {N(µ, σ²) : µ ∈ ℝ, σ ∈ ℝ⁺}. We parameterize h = h_θ as a deep neural network with parameters θ. The network takes in the concatenation of x and r and outputs the (µ, σ) that determine the returned CDF. We use a hyper-parameter α to trade off between the two objectives: L_α(θ) = (1 − α) L_PAIC(θ) + α L_NLL(θ) (Eq. 4). We compare different values of α ∈ [0.1, 1.0]. (A minimal code sketch of this setup follows the table.)
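
Below is a minimal PyTorch sketch of the setup quoted in the Experiment Setup row: a network that takes the concatenation of the features x and a random seed r, outputs a Gaussian (µ, σ), and is trained with the convex combination (1 − α)·L_PAIC + α·L_NLL. The network sizes and the exact form of the L_PAIC term (matching the predicted CDF value of the true label to the seed r) are assumptions for illustration; the authors' definitions are in the paper and the released code.

```python
import torch
import torch.nn as nn

class RandomizedForecaster(nn.Module):
    """Maps (x, r) with r ~ Uniform[0, 1] to a Gaussian forecast N(mu, sigma^2)."""
    def __init__(self, x_dim, hidden=64):  # hidden size is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),          # outputs (mu, log_sigma)
        )

    def forward(self, x, r):
        out = self.net(torch.cat([x, r], dim=-1))
        mu, log_sigma = out[..., :1], out[..., 1:]
        return mu, log_sigma.exp()

def combined_loss(model, x, y, alpha=0.5):
    """L_alpha = (1 - alpha) * L_PAIC + alpha * L_NLL, as in Eq. (4)."""
    r = torch.rand(x.shape[0], 1)                      # fresh seed per example
    mu, sigma = model(x, r)
    dist = torch.distributions.Normal(mu, sigma)
    nll = -dist.log_prob(y).mean()                     # L_NLL
    paic = (dist.cdf(y) - r).abs().mean()              # assumed form of L_PAIC
    return (1 - alpha) * paic + alpha * nll
```

Setting α = 1 recovers plain NLL training, while smaller α puts more weight on the individual-calibration term, matching the α ∈ [0.1, 1.0] sweep quoted above.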
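
The fairness experiment quoted in the Research Type row reports calibration error on protected groups. The following hedged sketch shows one common way such a group-level calibration error can be computed: take the predicted CDF values of the true labels within a group and measure the average gap between nominal confidence levels and empirical coverage. The exact metric used in the paper's Figure 3 may differ; the grid of levels and the use of the RandomizedForecaster above are illustrative assumptions.

```python
import numpy as np
import torch

def group_calibration_error(model, x_group, y_group,
                            levels=np.linspace(0.05, 0.95, 19)):
    """Average |empirical coverage - nominal level| over a grid of levels."""
    with torch.no_grad():
        r = torch.rand(x_group.shape[0], 1)
        mu, sigma = model(x_group, r)
        # CDF value of each true label under its predicted Gaussian.
        p = torch.distributions.Normal(mu, sigma).cdf(y_group).squeeze(-1).numpy()
    coverage = np.array([(p <= c).mean() for c in levels])
    return np.abs(coverage - levels).mean()
```

A perfectly calibrated forecaster on that group would give coverage equal to each nominal level, so the returned error would be close to zero.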