Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
Authors: Bang An, Zora Che, Mucong Ding, Furong Huang
NeurIPS 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and real datasets, including image and tabular data, demonstrate that our approach effectively transfers fairness and accuracy under various types of distribution shifts. |
| Researcher Affiliation | Academia | Bang An, Department of Computer Science, University of Maryland, College Park; Zora Che, Department of Computer Science, Boston University; Mucong Ding, Department of Computer Science, University of Maryland, College Park; Furong Huang, Department of Computer Science, University of Maryland, College Park |
| Pseudocode | No | The paper includes a training diagram (Figure 2) and describes the algorithm components, but it does not present a formal pseudocode block or algorithm steps labeled “Algorithm”. |
| Open Source Code | Yes | Code is available at https://github.com/umd-huang-lab/transfer-fairness. |
| Open Datasets | Yes | We use UTKFace [71] as the source data and FairFace [30] as the target data. [...] The synthetic dataset is adapted from the 3dshapes dataset [31]. [...] We further evaluate our method on the New Adult dataset [19]. |
| Dataset Splits | Yes | The checklist states “Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] see Section D.” [...] We set CA as the source domain and all the other states as the target domain. |
| Hardware Specification | No | The paper’s checklist states “Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] see Section D”. However, Section D (the appendix) is not included in the analyzed text, so no specific hardware details are available in the main paper. |
| Software Dependencies | No | The paper mentions using specific models like VGG16 and MLP but does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | No | The paper’s checklist states “Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] see Section D”. However, Section D (the appendix) is not included in the analyzed text, so specific setup details such as hyperparameter values are not available in the main paper. |