Transferring Fairness under Distribution Shifts via Fair Consistency Regularization

Authors: Bang An, Zora Che, Mucong Ding, Furong Huang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | “Experiments on synthetic and real datasets, including image and tabular data, demonstrate that our approach effectively transfers fairness and accuracy under various types of distribution shifts.”
Researcher Affiliation | Academia | Bang An, Department of Computer Science, University of Maryland, College Park (bangan@umd.edu); Zora Che, Department of Computer Science, Boston University (zche@bu.edu); Mucong Ding, Department of Computer Science, University of Maryland, College Park (mcding@umd.edu); Furong Huang, Department of Computer Science, University of Maryland, College Park (furongh@umd.edu)
Pseudocode | No | The paper includes a training diagram (Figure 2) and describes the components of the algorithm, but it does not present a formal pseudocode block or steps labeled “Algorithm”. (An illustrative, hedged sketch of a group-balanced consistency regularizer follows the table.)
Open Source Code | Yes | “Code is available at https://github.com/umd-huang-lab/transfer-fairness.”
Open Datasets | Yes | “We use UTKFace [71] as the source data and FairFace [30] as the target data. [...] The synthetic dataset is adapted from the 3dshapes dataset [31]. [...] We further evaluate our method on the New Adult dataset [19].”
Dataset Splits | Yes | The checklist answers “3. If you ran experiments... (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?” with “[Yes] see Section D.” The paper also states: “We set CA as the source domain and all the other states as the target domain.” (A loading sketch for this split follows the table.)
Hardware Specification | No | The checklist answers “Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?” with “[Yes] see Section D”, but Section D is not included in the provided text, so no specific hardware details are available in the main paper.
Software Dependencies | No | The paper mentions specific models such as VGG16 and MLP but does not list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | No | The checklist answers “Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?” with “[Yes] see Section D”, but Section D is not included in the provided text, so specific setup details such as hyperparameter values are not available in the main paper.
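Since no pseudocode appears in the paper, the following is a minimal, hypothetical sketch of what a group-balanced consistency regularizer could look like, in the spirit of the paper's title. The FixMatch-style confidence thresholding, the function name, and all hyperparameters are assumptions made here for illustration, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

def groupwise_consistency_loss(model, x_weak, x_strong, group,
                               num_groups=2, threshold=0.95):
    """Hypothetical group-balanced consistency regularizer (not the paper's code).

    Pseudo-label confident predictions on weakly augmented inputs, push
    predictions on strongly augmented views toward them, and average the
    loss per sensitive group so each group contributes equally.
    """
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()  # keep only confident pseudo-labels
    per_sample = F.cross_entropy(model(x_strong), pseudo, reduction="none") * mask

    # Mean within each sensitive group, then across groups, so the
    # regularizer is not dominated by the majority group.
    group_losses = []
    for g in range(num_groups):
        in_g = (group == g).float()
        group_losses.append((per_sample * in_g).sum() / in_g.sum().clamp(min=1.0))
    return torch.stack(group_losses).mean()
```

Averaging per group before averaging across groups is one simple way to keep a minority group's consistency error from being washed out by the majority group's samples.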
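For the tabular experiments, the New Adult dataset [19] is distributed through the folktables package. Below is a minimal sketch of loading California as the source domain, matching the split quoted above; the choice of the ACSIncome task and the 2018 1-Year survey are assumptions here, not details confirmed by the excerpt.

```python
from folktables import ACSDataSource, ACSIncome

# Fetch 2018 ACS person-level records for California, the source domain;
# any other state would be loaded the same way as a target domain.
data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
ca_data = data_source.get_data(states=["CA"], download=True)

# ACSIncome maps raw records to features, binary income labels, and a
# sensitive-group attribute for fairness evaluation.
features, labels, group = ACSIncome.df_to_numpy(ca_data)
```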