Conformal Classification with Equalized Coverage for Adaptively Selected Groups

Authors: Yanfei Zhou, Matteo Sesia

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the validity and effectiveness of this method on simulated and real data sets." and "Section 3 demonstrates the empirical performance of AFCP on synthetic and real data."
Researcher Affiliation | Academia | "Yanfei Zhou, Department of Data Sciences and Operations, University of Southern California, Los Angeles, California, USA, yanfei.zhou@marshall.usc.edu" and "Matteo Sesia, Department of Data Sciences and Operations, University of Southern California, Los Angeles, California, USA, sesia@marshall.usc.edu"
Pseudocode | Yes | "Algorithm 1: Automatic attribute selection using a placeholder test label" and "Algorithm 2: Adaptively Fair Conformal Prediction (AFCP)"
Open Source Code | Yes | "Software implementing the algorithms and data experiments are available online at https://github.com/FionaZ3696/Adaptively-Fair-Conformal-Prediction."
Open Datasets | Yes | "We apply AFCP and its benchmarks to the open-domain Nursery data set [49]", "...additional experimental results using the open-source COMPAS dataset [50]", and "We apply our method to the open-domain Adult Income dataset [51]."
Dataset Splits | Yes | "In each case, 50% of the samples are used for training and the remaining 50% for calibration." and "In each experiment, 50% of the samples are randomly assigned for training and the remaining 50% for calibration." (see the calibration sketch after this table)
Hardware Specification | No | "The numerical experiments described in this paper were carried out on a computing cluster. Individual experiments... required less than 25 minutes and 5GB of memory on a single CPU." The paper reports runtime and memory usage but does not identify any specific hardware model.
Software Dependencies | No | The paper mentions software components such as the sklearn Python package, the Adam optimizer, the ReLU activation function, the softmax function, and the cross-entropy loss function, but does not provide version numbers for any of these dependencies.
Experiment Setup | Yes | "For all methods considered, the classifier is based on a five-layer neural network with linear layers interconnected via a ReLU activation function. The output layer uses a softmax function to estimate the conditional label probabilities. The Adam optimizer and cross-entropy loss function are used in the training process, with a learning rate set at 0.0001. The loss values demonstrate convergence after 100 epochs of training. For all methods, the miscoverage target level is set at α = 0.1." (see the training sketch below)
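
The experiment setup above is specific enough to sketch in code. The following is a minimal PyTorch sketch of the reported configuration: a five-layer fully connected network with ReLU activations and a softmax output, trained with Adam (learning rate 0.0001) and cross-entropy loss for 100 epochs. The hidden width, the data loader, and the names Classifier, train_classifier, and predict_proba are illustrative assumptions, not taken from the paper or its repository.

import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Five-layer network with linear layers interconnected via ReLU.
    The hidden width (64) is an assumption; the paper does not state it."""
    def __init__(self, in_dim, n_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),  # logits; softmax applied in predict_proba
        )

    def forward(self, x):
        return self.net(x)

def train_classifier(model, loader, epochs=100, lr=1e-4):
    """Adam optimizer and cross-entropy loss, as reported in the paper.
    CrossEntropyLoss applies log-softmax internally, so it takes raw logits."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

@torch.no_grad()
def predict_proba(model, x):
    """Softmax output layer: estimated conditional label probabilities."""
    model.eval()
    return torch.softmax(model(x), dim=1)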
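
Similarly, the 50/50 train/calibration split and the miscoverage target α = 0.1 can be illustrated with a standard split-conformal baseline. The sketch below uses the simple nonconformity score 1 − π̂_y(x); it shows plain split-conformal classification only and does not implement AFCP's adaptive attribute selection (Algorithms 1-2 of the paper). The function name split_conformal_sets and the score choice are assumptions.

import numpy as np
from sklearn.model_selection import train_test_split

def split_conformal_sets(pi_calib, y_calib, pi_test, alpha=0.1):
    """Standard split-conformal prediction sets for classification.
    pi_calib, pi_test: (n, K) arrays of estimated class probabilities;
    y_calib: integer labels for the calibration set."""
    n = len(y_calib)
    # Nonconformity score: one minus the probability of the true label.
    scores = 1.0 - pi_calib[np.arange(n), y_calib]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, level, method="higher")
    # A label enters the prediction set if its score is at most the threshold.
    return [np.where(1.0 - row <= qhat)[0] for row in pi_test]

# 50/50 split into training and calibration, as in the paper's experiments;
# X and y stand for pre-processed features and labels (assumed available).
# X_train, X_calib, y_train, y_calib = train_test_split(X, y, train_size=0.5)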