Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach
Authors: Zhimeng (Stephen) Jiang, Xiaotian Han, Hongye Jin, Guanchu Wang, Rui Chen, Na Zou, Xia Hu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the effectiveness of our proposed RFR algorithm on synthetic and real distribution shifts across various datasets. Experimental results demonstrate that RFR achieves better fairness-accuracy trade-off performance compared with several baselines. |
| Researcher Affiliation | Collaboration | Zhimeng Jiang¹, Xiaotian Han¹, Hongye Jin¹, Guanchu Wang², Rui Chen³, Na Zou¹, Xia Hu² (¹Texas A&M University, ²Rice University, ³Samsung Electronics America) |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/zhimengj0326/RFR_NeurIPS23. |
| Open Datasets | Yes | We adopt the following datasets in our experiments. UCI Adult [13] dataset... ACS-Income [14]... ACS-Employment [14]... |
| Dataset Splits | No | The paper mentions "source dataset DS" for training and "target dataset DT" for evaluation under distribution shift, but it does not specify a distinct validation set or its split percentages/counts for hyperparameter tuning or early stopping. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types) used to run the experiments. |
| Software Dependencies | No | The paper mentions using "Adam optimizer" but does not specify version numbers for any key software components or libraries (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We adopt the Adam optimizer with a 10⁻⁵ learning rate and 0.01 weight decay for all models. For the baseline ADV, we alternately train the classification and adversarial networks for 70 and 30 epochs, respectively. The hyperparameters for ADV are set as {0.0, 1.0, 10.0, 100.0, 500.0}. For adding regularization, we adopt the hyperparameter set {0.0, 0.5, 1.0, 10.0, 30.0, 50.0}. |
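The reported setup can be captured as a small configuration sketch. This is an illustrative assumption, not code from the paper's repository; the helper names (`make_optimizer_config`, `ADV_SCHEDULE`, the grid lists) are hypothetical, while the numeric values come directly from the experiment-setup row above.

```python
# Hedged sketch of the experiment configuration reported in the paper.
# All names here are illustrative; only the values are from the paper.

def make_optimizer_config():
    # "Adam optimizer with 10^-5 learning rate and 0.01 weight decay for all models"
    return {"optimizer": "Adam", "lr": 1e-5, "weight_decay": 0.01}

# ADV baseline: alternately train the classification network (70 epochs)
# and the adversarial network (30 epochs).
ADV_SCHEDULE = {"classification_epochs": 70, "adversarial_epochs": 30}

# Hyperparameter grids reported for ADV and for the regularization variant.
ADV_HYPERPARAMS = [0.0, 1.0, 10.0, 100.0, 500.0]
REG_HYPERPARAMS = [0.0, 0.5, 1.0, 10.0, 30.0, 50.0]

if __name__ == "__main__":
    print(make_optimizer_config())
```

A reproduction script would sweep both grids with the same optimizer config, then report the fairness-accuracy trade-off for each setting.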