Fairness-Aware Meta-Learning via Nash Bargaining

Authors: Yi Zeng, Xuelin Yang, Li Chen, Cristian Canton Ferrer, Ming Jin, Michael I. Jordan, Ruoxi Jia

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method is supported by theoretical results, notably a proof of the NBS (Nash bargaining solution) for gradient aggregation free from linear independence assumptions, a proof of Pareto improvement, and a proof of monotonic improvement in validation loss. We also show empirical effects across various fairness objectives on six key fairness datasets and two image classification tasks.
Researcher Affiliation | Collaboration | Yi Zeng¹, Xuelin Yang², Li Chen³, Cristian Canton Ferrer³, Ming Jin¹, Michael I. Jordan², Ruoxi Jia¹. ¹Virginia Tech, Blacksburg, VA 24061, USA; ²University of California, Berkeley, CA 94720, USA; ³Meta AI, Menlo Park, CA 94025, USA
Pseudocode | Yes | Algorithm 1: Two-stage Nash-Meta-Learning Training (a hedged sketch of the bargaining step appears after the table)
Open Source Code | Yes | Code: Nash-Meta-Learning. ... We provide the code at https://github.com/reds-lab/Nash-Meta-Learning with detailed instructions included.
Open Datasets | Yes | We test our method on six standard fairness datasets across various sectors of fairness tasks: financial services (Adult Income [3], Credit Default [54]), marketing (Bank Telemarketing [33]), criminal justice (Communities and Crime [42]), education (Student Performance [11]), and disaster response (Titanic Survival [12]).
Dataset Splits | Yes | Test sets comprise 3% of each dataset (10% for the Student Performance dataset, which has 649 samples) and are formed by randomly selecting a demographically and label-balanced subset. See Table 2 in Appendix A.6 for data distribution specifics. (A balanced-sampling sketch appears after the table.)
Hardware Specification | Yes | All experiments were conducted on an internal cluster using a single NVIDIA H100 GPU.
Software Dependencies | No | The paper mentions model architectures such as ResNet-18 and implies Python-based tooling via the GitHub repository, but it does not specify exact version numbers for any software libraries, frameworks, or dependencies used in the experiments.
Experiment Setup | Yes | Common hyperparameters across all algorithms include a total of 50 training epochs, an SGD momentum of 0.9, and a weight decay of 5e-4, with the bargaining phase limited to 15 epochs for the three settings incorporating the proposed Nash-Meta-Learning. Hyperparameters that varied across settings are detailed in Table 3. (A configuration sketch appears after the table.)
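
The Pseudocode row names a two-stage Nash-Meta-Learning training procedure but does not reproduce it. The sketch below illustrates one common reading of Nash-bargaining-style gradient aggregation: find simplex weights for the per-group gradients so that the aggregated direction approximately maximizes the sum of log-utilities ⟨g_i, d⟩. The function name `nash_bargaining_weights`, the projected-gradient solver, and the toy data are illustrative assumptions, not the paper's Algorithm 1.

```python
# Hedged sketch of Nash-bargaining-style gradient aggregation (NOT the paper's
# exact Algorithm 1): find simplex weights alpha so the aggregated direction
# d = alpha @ G roughly maximizes sum_i log <g_i, d>, keeping every group's
# utility <g_i, d> positive, which is what makes the update Pareto-friendly.
import numpy as np

def nash_bargaining_weights(G, iters=500, lr=0.05):
    """G: (n_groups, n_params) array of per-group gradients.
    Returns nonnegative weights that sum to 1."""
    K = G @ G.T                                 # Gram matrix of group gradients
    n = K.shape[0]
    alpha = np.full(n, 1.0 / n)                 # start from uniform weights
    for _ in range(iters):
        u = np.clip(K @ alpha, 1e-8, None)      # u_i = <g_i, d>: group utilities
        step = K @ (1.0 / u)                    # ascent direction of sum_i log u_i
        alpha = np.clip(alpha + lr * step / n, 0.0, None)
        alpha /= alpha.sum()                    # project back onto the simplex
    return alpha

# Toy usage: three demographic groups whose gradients partially conflict.
rng = np.random.default_rng(0)
G = rng.normal(size=(3, 10))
alpha = nash_bargaining_weights(G)
d = alpha @ G                                   # aggregated update direction
print("weights:", np.round(alpha, 3), "utilities:", np.round(G @ d, 3))
```

In a two-stage loop matching the setup row, an update of this form would be applied during the 15-epoch bargaining phase, after which training would continue with ordinary pooled-gradient steps.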
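
The Dataset Splits row describes a roughly 3% test subset that is demographically and label-balanced. The helper below is a hedged sketch of one way to draw such a subset by sampling an equal number of points from each (group, label) cell; the function name `balanced_test_indices` and the equal-per-cell rule are assumptions, since the exact sampling procedure is documented only in the paper's Appendix A.6 (Table 2).

```python
# Hedged sketch of a demographically and label-balanced test split: draw an
# (approximately) equal number of samples from every (group, label) cell.
import numpy as np

def balanced_test_indices(y, group, test_frac=0.03, seed=0):
    """y: labels, group: demographic attribute (both length-n array-likes).
    Returns (train_idx, test_idx) with a balanced test set of ~test_frac * n."""
    rng = np.random.default_rng(seed)
    cells = {}
    for i, key in enumerate(zip(group, y)):     # bucket indices by (group, label)
        cells.setdefault(key, []).append(i)
    n_test = int(round(test_frac * len(y)))
    per_cell = max(1, n_test // len(cells))     # equal quota per cell
    test_idx = np.concatenate([
        rng.choice(idx, size=min(per_cell, len(idx)), replace=False)
        for idx in cells.values()
    ])
    train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
    return train_idx, test_idx

# Toy usage with a binary label and a binary demographic attribute.
y = np.array([0, 1] * 50)
g = np.array([0] * 50 + [1] * 50)
train_idx, test_idx = balanced_test_indices(y, g, test_frac=0.1)
```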
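
The Experiment Setup row lists the hyperparameters shared across all algorithms. The snippet below collects them into a configuration dictionary and a PyTorch SGD optimizer; the learning rate and the stand-in linear model are placeholders, since per-setting values live in the paper's Table 3 and are not reproduced in this report.

```python
# Shared hyperparameters quoted in the Experiment Setup row; lr and the model
# are placeholders (per-setting values are in the paper's Table 3).
import torch

config = {
    "epochs": 50,            # total training epochs for all algorithms
    "bargain_epochs": 15,    # bargaining phase for the Nash-Meta-Learning settings
    "momentum": 0.9,         # SGD momentum
    "weight_decay": 5e-4,    # L2 weight decay
    "lr": 0.1,               # placeholder learning rate (varied per setting)
}

model = torch.nn.Linear(16, 2)          # stand-in model for illustration only
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=config["lr"],
    momentum=config["momentum"],
    weight_decay=config["weight_decay"],
)
```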