Learning fair representation with a parametric integral probability metric

Authors: Dongha Kim, Kunwoong Kim, Insung Kong, Ilsang Ohn, Yongdai Kim

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Moreover, by numerical experiments, we show that our proposed LFR algorithm is computationally lighter and more stable, and the final prediction model is competitive or superior to other LFR algorithms using more complex discriminators. ... This section empirically shows that LFR using the sigmoid IPM (sIPM-LFR) performs well by analyzing supervised and unsupervised LFR tasks.
Researcher Affiliation | Academia | (1) Department of Statistics, Sungshin Women's University; (2) Data Science Center, Sungshin Women's University; (3) Department of Statistics, Seoul National University; (4) Department of Statistics, Inha University. Correspondence to: Yongdai Kim <ydkim0903@gmail.com>.
Pseudocode | Yes | More detailed descriptions including our pseudo algorithm are in Appendix B. ... In this subsection, we provide the sIPM-LFR algorithm in Algorithm 1. (A hedged sketch of the sigmoid IPM penalty appears after the table.)
Open Source Code | Yes | The PyTorch implementation of the sIPM-LFR is publicly available at https://github.com/kwkimonline/sIPM-LFR.
Open Datasets | Yes | We analyze three benchmark datasets: 1) Adult (Dua & Graff, 2017), 2) COMPAS, and 3) Health.
Dataset Splits | Yes | We split the training data once more into two parts of the ratio 80% and 20%, each of which is used for training and validation, respectively. (A split sketch appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions PyTorch and the Adadelta (Zeiler, 2012) optimizer but does not specify version numbers for these dependencies, which are needed for full reproducibility.
Experiment Setup | Yes | For the supervised LFR, we train the encoder h and classifier f by applying the stochastic gradient descent step to the objective function (3) for 400 training epochs. ... For the unsupervised LFR, we first minimize the formula (4) to optimize the encoder and decoder for 300 training epochs. ... Afterward, for given label information Y, we train and select the best downstream classifier by minimizing the standard cross-entropy loss for 100 epochs. ... We apply the Adadelta (Zeiler, 2012) optimizer with a learning rate of 2.0 and a mini-batch size of 512. (A training-loop sketch appears after the table.)
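
Regarding the Pseudocode row: the paper's sigmoid IPM measures the discrepancy between the representation distributions of the two sensitive groups using single-sigmoid discriminators. Below is a minimal PyTorch sketch of such a penalty; the class name SigmoidIPM and the single nn.Linear discriminator are illustrative assumptions, not the authors' exact implementation (see the repository linked above for that).

```python
# Minimal sketch of a sigmoid-IPM penalty between two groups of
# representations. Assumes the discriminator class consists of single
# sigmoid units sigmoid(theta^T z + b); names and sizes are illustrative.
import torch
import torch.nn as nn

class SigmoidIPM(nn.Module):
    def __init__(self, rep_dim: int):
        super().__init__()
        self.disc = nn.Linear(rep_dim, 1)  # parameters (theta, b)

    def forward(self, z0: torch.Tensor, z1: torch.Tensor) -> torch.Tensor:
        # |E[sigmoid(theta^T z + b) | s=0] - E[sigmoid(theta^T z + b) | s=1]|;
        # the sup over (theta, b) is approximated by adversarial ascent
        # steps on self.disc during training (see the loop further below).
        return (torch.sigmoid(self.disc(z0)).mean()
                - torch.sigmoid(self.disc(z1)).mean()).abs()
```

Because the IPM is a supremum over (theta, b), training alternates ascent steps on this discriminator with descent steps on the encoder.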
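
The 80%/20% train/validation split quoted in the Dataset Splits row can be reproduced with a standard random split; the fixed seed below is an illustrative choice, since the excerpt does not state how the split was randomized.

```python
# Sketch of the reported 80% / 20% train/validation split.
import torch
from torch.utils.data import Dataset, random_split

def split_train_val(train_data: Dataset, val_frac: float = 0.2, seed: int = 0):
    n_val = int(len(train_data) * val_frac)
    n_train = len(train_data) - n_val
    # The fixed seed is an assumption made for reproducibility of the sketch.
    gen = torch.Generator().manual_seed(seed)
    return random_split(train_data, [n_train, n_val], generator=gen)
```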
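
Finally, the Experiment Setup row pins down the optimizer (Adadelta, learning rate 2.0), the mini-batch size (512), and the epoch counts. The following is a hedged sketch of a supervised sIPM-LFR training loop under those settings; the encoder/classifier architectures, the fairness weight lam, and the alternating update schedule are assumptions not specified in the excerpt.

```python
# Sketch of the supervised training loop under the reported settings:
# Adadelta (lr=2.0), mini-batch size 512, 400 epochs. Architectures, the
# fairness weight `lam`, and the update schedule are assumptions.
import torch
from torch.utils.data import DataLoader, Dataset

def train_supervised(encoder, classifier, ipm, data: Dataset,
                     epochs: int = 400, lam: float = 1.0):
    loader = DataLoader(data, batch_size=512, shuffle=True)
    opt = torch.optim.Adadelta(
        list(encoder.parameters()) + list(classifier.parameters()), lr=2.0)
    opt_d = torch.optim.Adadelta(ipm.parameters(), lr=2.0)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y, s in loader:  # s: binary sensitive attribute
            z = encoder(x)
            # Ascent step on the discriminator approximates the sup in the
            # IPM (assumes each mini-batch contains both sensitive groups).
            d_loss = -ipm(z.detach()[s == 0], z.detach()[s == 1])
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()
            # Descent step on encoder + classifier: cross-entropy plus
            # the sIPM fairness penalty weighted by `lam`.
            loss = (bce(classifier(z).squeeze(-1), y.float())
                    + lam * ipm(z[s == 0], z[s == 1]))
            opt.zero_grad()
            loss.backward()
            opt.step()
```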