Counterfactually Fair Representation

Authors: Zhiqun Zuo, Mahdi Khalili, Xueru Zhang

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We conduct extensive experiments (across different causal models, datasets, and fairness definitions) to compare our method with existing methods. Empirical results show that 1) our method outperforms the method of only using non-descendants of sensitive attributes; 2) existing heuristic methods for training ML models under CF fall short of achieving perfect CF fairness. |
| Researcher Affiliation | Collaboration | CSE Department, The Ohio State University, Columbus, OH 43210; Yahoo Research, New York, NY 10003 |
| Pseudocode | Yes | Algorithm 1 CF Representation Generation h(x, a; M, s) ... Algorithm 2 PCF Representation Generation h(x, a; M, s, X^{Pc}_{G_A}) (a hedged sketch of the underlying recipe follows this table) |
| Open Source Code | Yes | The code repository for this work can be found at https://github.com/osu-srml/CF_Representation_Learning |
| Open Datasets | Yes | We use the Law School Success dataset [40] and the UCI Adult Income Dataset [25] to evaluate our proposed method. |
| Dataset Splits | Yes | We split each dataset into a training set, validation set, and test set with a ratio of 60%-20%-20% (see the split sketch after this table). |
| Hardware Specification | Yes | We conducted our experiments using a supercomputing platform. The CPUs used were Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz, and the GPU model was a Tesla V100. |
| Software Dependencies | Yes | Our primary software environments were Python 3.9, PyTorch 1.12.1, and CUDA 10.2. |
| Experiment Setup | Yes | The batch size was set to 256 and the learning rate to 0.001. The experiments for the UF, CA, ICA, and CR methods were based on the same VAE. ... For the final predictors, we used a linear regression model for the Law School Success dataset and a logistic regression model for the UCI Adult Income dataset. When training the CR model, we set the coefficient of the regularization term to 0.002 (see the configuration sketch after this table). |
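
The paper's Algorithms 1 and 2 are not reproduced in this report. For orientation only, below is a minimal sketch of the abduction-action-prediction recipe that counterfactual generation procedures of this kind build on, assuming a hypothetical linear additive-noise causal model. The weights `W`, `B` and all helper names are illustrative assumptions, not the authors' h(x, a; M, s).

```python
import numpy as np

# Hypothetical linear additive-noise SCM: A -> X with exogenous noise U_X,
# i.e. x = W * a + B + u_x. W and B are illustrative, not from the paper.
W, B = 1.5, 0.2

def abduct(x, a):
    """Abduction: recover the exogenous noise consistent with the factual (x, a)."""
    return x - (W * a + B)

def predict_x(a, u_x):
    """Prediction: regenerate x under a (possibly intervened) value of a."""
    return W * a + B + u_x

def cf_representation(x, a, interventions):
    """Stack the factual feature with its counterfactuals under each do(A = s)."""
    u_x = abduct(x, a)                                # step 1: abduction
    cfs = [predict_x(s, u_x) for s in interventions]  # steps 2-3: action + prediction
    return np.concatenate([[x], cfs])

# Factual (x = 1.0, a = 1) plus its counterfactual features under A = 0 and A = 1.
print(cf_representation(x=1.0, a=1, interventions=[0, 1]))
```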
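The reported 60%-20%-20% split can be realized as follows; the scikit-learn calls, placeholder arrays, and fixed seed are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder features and labels standing in for either dataset.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# 60%-20%-20%: hold out 40% first, then split it half-and-half.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
print(len(X_train), len(X_val), len(X_test))  # 600 200 200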
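Finally, a sketch of how the quoted hyperparameters (batch size 256, learning rate 0.001, regularization coefficient 0.002) might be wired into a PyTorch VAE training step. The architecture and the fairness-penalty placeholder are assumptions; the authors' actual CR model and regularizer are in the linked repository.

```python
import torch
from torch import nn

# Hyperparameters quoted in the table; everything else below is an assumption.
BATCH_SIZE, LR, REG_COEF = 256, 1e-3, 0.002

class VAE(nn.Module):
    """Minimal VAE encoder/decoder pair; the architecture is illustrative only."""
    def __init__(self, x_dim=10, z_dim=4):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=LR)

x = torch.randn(BATCH_SIZE, 10)    # placeholder batch of features
recon, mu, logvar = model(x)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
fairness_reg = torch.tensor(0.0)   # stand-in for the CR fairness penalty
loss = nn.functional.mse_loss(recon, x) + kl + REG_COEF * fairness_reg

opt.zero_grad()
loss.backward()
opt.step()
```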