fairret: a Framework for Differentiable Fairness Regularization Terms

Authors: Maarten Buyl, MaryBeth Defrance, Tijl De Bie

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show the behavior of their gradients and their utility in enforcing fairness with minimal loss of predictive power compared to baselines. Our contribution includes a PyTorch implementation of the FAIRRET framework. We visualize the FAIRRETs' gradients and evaluate their empirical performance in enforcing fairness notions compared to baselines.
Researcher Affiliation | Academia | Maarten Buyl (Ghent University, maarten.buyl@ugent.be), MaryBeth Defrance (Ghent University, marybeth.defrance@ugent.be), Tijl De Bie (Ghent University, tijl.debie@ugent.be)
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. Appendix E provides 'Code Use Examples', which are snippets of PyTorch code rather than pseudocode.
Open Source Code | Yes | The framework is available as a package at https://github.com/aida-ugent/fairret.
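For readers unfamiliar with the framework, the snippet below is a minimal sketch of the kind of differentiable fairness regularization term that fairret packages: a PyTorch penalty on the gap between each sensitive group's predicted positive rate and the overall positive rate. The function name and tensor layout are illustrative assumptions, not the library's actual API; see the repository and Appendix E of the paper for the real usage examples.

```python
import torch

def positive_rate_gap(logit: torch.Tensor, sens: torch.Tensor) -> torch.Tensor:
    """Hypothetical differentiable fairness penalty: L1 gap between each
    sensitive group's mean predicted positive rate and the overall rate.

    logit: shape (N,), raw classifier outputs.
    sens:  shape (N, K), one-hot (or soft) sensitive-group membership.
    """
    prob = torch.sigmoid(logit)                     # predicted P(Y=1 | x)
    overall = prob.mean()                           # positive rate over the whole batch
    group_size = sens.sum(dim=0).clamp(min=1e-8)    # (soft) number of samples per group
    group_rate = (sens * prob.unsqueeze(1)).sum(dim=0) / group_size
    return (group_rate - overall).abs().sum()       # zero when all groups match the overall rate
```

Adding such a term to the binary cross-entropy loss with a weight λ yields the regularized training objective that the experiments evaluate.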
Open Datasets | Yes | Experiments were conducted on the Bank (Moro et al., 2014), Credit Card (Yeh & Lien, 2009), Law School, and ACSIncome (Ding et al., 2021) datasets.
Dataset Splits | Yes | To find these hyperparameters, we took the 80%/20% train/test split already generated for each seed, and further divided the train set into a smaller train set and a validation set with relative sizes 80% and 20%, respectively.
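As a concrete illustration of the described nesting, the sketch below assumes scikit-learn's train_test_split and a per-seed random state; the helper name and array arguments (features X, labels y, sensitive attributes s) are hypothetical, since the paper only specifies the split proportions.

```python
from sklearn.model_selection import train_test_split

def make_splits(X, y, s, seed):
    # 80%/20% train/test split, regenerated for each random seed.
    X_tr_full, X_test, y_tr_full, y_test, s_tr_full, s_test = train_test_split(
        X, y, s, test_size=0.2, random_state=seed)
    # The train set is further divided into a smaller train set (80%)
    # and a validation set (20%) used for hyperparameter selection.
    X_tr, X_val, y_tr, y_val, s_tr, s_val = train_test_split(
        X_tr_full, y_tr_full, s_tr_full, test_size=0.2, random_state=seed)
    return (X_tr, y_tr, s_tr), (X_val, y_val, s_val), (X_test, y_test, s_test)
```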
Hardware Specification | Yes | All experiments in Sec. 4 were conducted on an internal server equipped with a 12-core Intel(R) Xeon(R) Gold processor and 256 GB of RAM.
Software Dependencies | No | The paper mentions PyTorch, cvxpy, and the Adam optimizer, but does not specify version numbers for these or other key software dependencies, which would be required for reproducibility.
Experiment Setup | Yes | The classifier ℎ was a fully connected neural net with hidden layers of sizes [256, 128, 32] followed by a sigmoid... optimized with the Adam optimizer implementation of PyTorch, with a learning rate of 0.001 and a batch size of 4096. The loss was minimized over 100 epochs, with 𝜆 = 0 for the first 20 to avoid constraining ℎ before it learns anything.
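Putting the reported hyperparameters together, a hedged reconstruction of the training setup might look as follows. The synthetic placeholder data, the input dimension of 64, the ReLU hidden activations, and the fairness strength of 1.0 after the 20-epoch warm-up are assumptions not stated in this excerpt (the paper sweeps fairness strengths); positive_rate_gap refers to the sketch given under the Open Source Code row above.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic placeholder data; in the paper the features, labels, and sensitive
# attributes come from the tabular fairness benchmarks listed above.
X = torch.randn(20000, 64)                                # 64 input features is illustrative
y = torch.randint(0, 2, (20000,)).float()
s = nn.functional.one_hot(torch.randint(0, 2, (20000,))).float()
loader = DataLoader(TensorDataset(X, y, s), batch_size=4096, shuffle=True)

model = nn.Sequential(                                    # hidden layers of sizes [256, 128, 32]
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 32), nn.ReLU(),
    nn.Linear(32, 1),                                      # final sigmoid is folded into the BCE-with-logits loss
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(100):
    lam = 0.0 if epoch < 20 else 1.0                      # lambda = 0 for the first 20 epochs; 1.0 is a placeholder
    for xb, yb, sb in loader:
        logit = model(xb).squeeze(-1)
        loss = nn.functional.binary_cross_entropy_with_logits(logit, yb)
        loss = loss + lam * positive_rate_gap(logit, sb)  # fairness term; see the earlier sketch
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```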