Sustaining Fairness via Incremental Learning
Authors: Somnath Basu Roy Chowdhury, Snigdha Chaturvedi
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical evaluations show that FaIRL is able to make fair decisions while achieving high performance on the target task, outperforming several baselines. |
| Researcher Affiliation | Academia | Somnath Basu Roy Chowdhury, Snigdha Chaturvedi; University of North Carolina at Chapel Hill; {somnath, snigdha}@cs.unc.edu |
| Pseudocode | Yes | Algorithm 1: Prototype Sampling |
| Open Source Code | Yes | Our implementation of FaIRL is publicly available at https://github.com/brcsomnath/FaIRL. |
| Open Datasets | Yes | Biased MNIST. We follow the setup of (Bahng et al. 2020) to generate a synthetic dataset using MNIST (LeCun et al. 1998)... Biography classification. We re-purpose the BIOS dataset (De-Arteaga et al. 2019)... |
| Dataset Splits | No | The paper discusses training and test sets and an exemplar-based approach for incremental learning. However, it does not explicitly define a separate validation split for hyperparameter tuning or model selection. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. It refers only generically to "training modern machine learning systems," which implies hardware usage but gives no specifics. |
| Software Dependencies | No | The paper states "Our implementation of FaIRL is publicly available at https://github.com/brcsomnath/FaIRL," and the repository may document software details, but the provided text itself does not list any software dependencies with specific version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | No | The paper notes that "Additional details of our experimental setup can be found in Appendix B," but Appendix B is not included in the provided text. The main body only mentions "β is a hyperparameter" (Equation 4) without specifying its value or other common setup details such as learning rate, batch size, or number of epochs. |