Long-Term Fairness with Unknown Dynamics
Authors: Tongxin Yin, Reilly Raab, Mingyan Liu, Yang Liu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | For the classification setting subject to group fairness, we compare our proposed algorithm to several baselines, including the repeated retraining of myopic or distributionally robust classifiers, and to a deep reinforcement learning algorithm that lacks fairness guarantees. Our experiments model human populations according to evolutionary game theory and integrate real-world datasets. |
| Researcher Affiliation | Collaboration | Tongxin Yin, Electrical and Computer Engineering, University of Michigan, Ann Arbor, MI 48109 (tyin@umich.edu); Reilly Raab, Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA 95064 (reilly@ucsc.edu); Mingyan Liu, Electrical and Computer Engineering, University of Michigan, Ann Arbor, MI 48109 (mingyan@umich.edu); Yang Liu, University of California, Santa Cruz, and ByteDance Research, Santa Cruz, CA 95064 (yangliu@ucsc.edu) |
| Pseudocode | Yes | Algorithm 1 L-UCBFair |
| Open Source Code | No | The paper mentions that 'Figures were generated using code included in the supplementary material', but it does not state that the implementations of its proposed methods (L-UCBFair or the R-TD3 agent) are open-source, nor does it provide a link to a repository for them. |
| Open Datasets | Yes | Using a modeled population with scalar features fit to the Adult dataset (Dua and Graff, 2017) at each time-step to mirror the evolving qualification rates (Appendix A.2)... |
| Dataset Splits | No | The paper mentions training agents and comparing algorithms but does not specify explicit train/validation/test dataset splits with percentages or sample counts for reproduction. |
| Hardware Specification | Yes | We run all the experiments on a single 1080Ti GPU. Figures were generated using code included in the supplementary material in less than 24 hours on a single Nvidia RTX A5000. |
| Software Dependencies | No | The paper states: 'We implement the R-TD3 agent using Stable Baselines3 (Raffin et al., 2021). The neural network is implemented using PyTorch (Paszke et al., 2019).' While it names the software, it does not provide specific version numbers for Stable Baselines3 or PyTorch. |
| Experiment Setup | Yes | The inputs of the network are state and action, passing through fully connected (fc) layers of sizes 256, 128, 64, 64. ReLU is used as the activation function between fc layers, while a SoftMax layer is applied after the last fc layer... We use Adam as the optimizer. Weight decay is set to 1e-4 and the learning rate to 1e-3, while the batch size is 128. |
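The experiment-setup row above can be sketched in PyTorch. This is a minimal illustration, not the authors' code: the quoted text specifies only the fc layer sizes (256, 128, 64, 64), ReLU between layers, a SoftMax after the last layer, and the Adam hyperparameters; the `state_dim` and `action_dim` values, the class name `PolicyNet`, and the exact input pairing are assumptions made here for the sake of a runnable example.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Sketch of the described network (hypothetical reconstruction).

    State and action are concatenated and passed through fc layers of
    sizes 256, 128, 64, 64 with ReLU in between; a SoftMax follows the
    last fc layer, as the setup description states.
    """

    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        sizes = [state_dim + action_dim, 256, 128, 64, 64]
        layers = []
        for i in range(len(sizes) - 1):
            layers.append(nn.Linear(sizes[i], sizes[i + 1]))
            if i < len(sizes) - 2:          # ReLU between fc layers only
                layers.append(nn.ReLU())
        layers.append(nn.Softmax(dim=-1))   # applied after the last fc layer
        self.net = nn.Sequential(*layers)

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

# Optimizer with the reported hyperparameters: Adam, lr 1e-3,
# weight decay 1e-4; batches of 128 samples.
model = PolicyNet(state_dim=4, action_dim=2)  # dims are illustrative
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
out = model(torch.randn(128, 4), torch.randn(128, 2))  # batch size 128
```

Because the final layer has width 64 followed by a SoftMax, each output row sums to one; whether 64 is the true output dimension is not stated in the quoted text and is carried over here only because it is the last listed layer size.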