Fair Selective Classification Via Sufficiency
Authors: Joshua K. Lee, Yuheng Bu, Deepta Rajan, Prasanna Sattigeri, Rameswar Panda, Subhro Das, Gregory W. Wornell
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4. Experimental Results |
| Researcher Affiliation | Collaboration | 1Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, USA 2MIT-IBM Watson AI Lab, IBM Research, Cambridge, USA. |
| Pseudocode | Yes | Algorithm 1 Training with sufficiency-based regularizer |
| Open Source Code | No | The paper does not provide concrete access to source code for its methodology (no repository link, explicit code-release statement, or code in supplementary materials); it only references code for a baseline method. |
| Open Datasets | Yes | We test our method on four binary classification datasets... Adult1, CelebA2, Civil Comments3, and CheXpert4. Footnotes provide URLs: 1https://archive.ics.uci.edu/ml/datasets/adult, 2http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html, 3https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data, 4https://stanfordmlgroup.github.io/competitions/chexpert |
| Dataset Splits | Yes | In all cases, we use the standard train/val/test splits packaged with the datasets and implemented our code in PyTorch. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'implemented our code in PyTorch' but does not provide specific version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | We set λ = 0.7 for all datasets... We then use a two-layer neural network with 80 nodes in the hidden layer for classification... and trained the network for 20 epochs. ...train a ResNet-50 model... for 10 epochs... We pass the data first through a BERT model... We then apply a two-layer neural network to the BERT output with 80 nodes in the hidden layer... trained the model for 20 epochs. ...fine-tuning the DenseNet-121... for 10 epochs... |
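For a rough sense of the classifier head described above, the following is a minimal, framework-free NumPy sketch of a two-layer network with 80 hidden nodes and a λ-weighted penalty term. The input dimension, weight initialization, activation choice, and the `reg_term` placeholder for the paper's sufficiency-based regularizer are all assumptions for illustration; the paper itself implements this in PyTorch.

```python
import numpy as np

rng = np.random.default_rng(0)

IN_DIM, HIDDEN, OUT = 32, 80, 1  # 80 hidden nodes per the paper; IN_DIM is hypothetical
LAM = 0.7                        # regularizer weight lambda = 0.7, as in the paper

W1 = rng.normal(0.0, 0.1, (IN_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, OUT))
b2 = np.zeros(OUT)

def forward(x):
    """Two-layer network: affine -> ReLU -> affine -> sigmoid."""
    h = np.maximum(0.0, x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(x, y, reg_term):
    """Binary cross-entropy plus the lambda-weighted regularizer.

    `reg_term` stands in for the paper's sufficiency-based penalty,
    which is not reproduced here.
    """
    p = forward(x).ravel()
    ce = -np.mean(y * np.log(p + 1e-9) + (1.0 - y) * np.log(1.0 - p + 1e-9))
    return ce + LAM * reg_term

# Tiny smoke test on random inputs.
x = rng.normal(size=(4, IN_DIM))
y = np.array([0.0, 1.0, 0.0, 1.0])
print(loss(x, y, reg_term=0.0))
```

This only illustrates how the λ-weighted penalty enters the objective; the actual sufficiency regularizer and training loop are specified in the paper's Algorithm 1.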