Enhancing Group Fairness in Online Settings Using Oblique Decision Forests
Authors: Somnath Basu Roy Chowdhury, Nicholas Monath, Ahmad Beirami, Rahul Kidambi, Kumar Avinava Dubey, Amr Ahmed, Snigdha Chaturvedi
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct empirical evaluations on 5 publicly available benchmarks (including vision and language datasets) to show that Aranyani achieves a better accuracy-fairness trade-off compared to baseline approaches. |
| Researcher Affiliation | Collaboration | ¹UNC Chapel Hill, ²Google DeepMind, ³Google Research. {somnath, snigdha}@cs.unc.edu, {nmonath, beirami, rahulkidambi, avinavadubey, amra}@google.com |
| Pseudocode | Yes | Algorithm 1 Aranyani Online Learning Algorithm |
| Open Source Code | Yes | Our implementation is available at: https://github.com/brcsomnath/Aranyani/. ... We will make our implementation public once the paper is published. |
| Open Datasets | Yes | We conduct empirical evaluations on 5 publicly available benchmarks (including vision and language datasets)... (a) UCI Adult (Becker & Kohavi, 1996), (b) Census (Dua et al., 2017), and (c) ProPublica COMPAS (Angwin et al., 2016)... Civil Comments (Do, 2019) toxicity classification and (b) CelebA (Liu et al., 2015) image classification datasets. |
| Dataset Splits | No | The paper describes an online learning setup in which data arrives incrementally and models are evaluated at each step. While it mentions a 'hyperparameter grid search', it does not provide explicit training/validation/test splits for reproducibility, beyond referring to the 'training set of each dataset' and the 'training and test set' of specific datasets used for online learning. |
| Hardware Specification | No | The paper mentions 'modern accelerators (like GPUs or TPUs)' but does not specify exact hardware models (e.g., NVIDIA A100, specific Intel/AMD CPUs, or detailed TPU versions) used for experiments. |
| Software Dependencies | No | The paper mentions using 'TensorFlow', 'Adam optimizer', 'Instructor' and 'CLIP models' (via the Hugging Face library), and 'Weights & Biases', but it does not specify explicit version numbers for these software dependencies required for a reproducible setup. |
| Experiment Setup | Yes | In all experiments (unless otherwise stated), we use oblique forests with 3 trees and each tree has a height of 4 (based on a hyperparameter grid search)... All experiments involving Aranyani were optimized using an Adam (Kingma & Ba, 2015) optimizer with a learning rate of 2 × 10⁻³ and Huber parameter δ = 0.01. |
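
The Experiment Setup row references oblique forests (3 trees, each of height 4). For context, an oblique tree differs from an axis-aligned one in that each internal node tests a learned linear combination of all input features rather than a single feature. The sketch below shows a generic soft oblique split in TensorFlow; the sigmoid routing and the `SoftObliqueNode` layer are illustrative assumptions, a common formulation for differentiable oblique trees, not necessarily Aranyani's exact parameterization.

```python
import tensorflow as tf

class SoftObliqueNode(tf.keras.layers.Layer):
    """Generic soft oblique split: routes an input to the left child with
    probability sigmoid(w . x + b). Illustrative only; the exact Aranyani
    node parameterization is not specified in this table."""

    def build(self, input_shape):
        d = int(input_shape[-1])
        # Oblique split: one weight per input feature, plus a bias threshold.
        self.w = self.add_weight(shape=(d, 1), initializer="glorot_uniform", name="w")
        self.b = self.add_weight(shape=(1,), initializer="zeros", name="b")

    def call(self, x):
        # Probability of routing to the left child.
        return tf.sigmoid(tf.matmul(x, self.w) + self.b)

# Usage: routing probabilities for a batch of 8 ten-dimensional inputs.
node = SoftObliqueNode()
p_left = node(tf.random.normal([8, 10]))  # shape (8, 1)
```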
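For readers reconstructing the training configuration, the following is a minimal sketch in TensorFlow (the framework the paper mentions). Only the Adam learning rate (2 × 10⁻³) and the Huber δ (0.01) come from the table above; the placeholder model, where the Huber loss is applied, and the one-update-at-a-time loop are assumptions, not the Aranyani oblique-forest training procedure.

```python
import tensorflow as tf

# Values reported in the Experiment Setup row above.
LEARNING_RATE = 2e-3   # Adam learning rate
HUBER_DELTA = 0.01     # Huber parameter delta
NUM_TREES = 3          # oblique forest size (grid-search result)
TREE_HEIGHT = 4        # height of each tree (grid-search result)

optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
huber = tf.keras.losses.Huber(delta=HUBER_DELTA)

# Placeholder model: a generic differentiable predictor standing in for
# the Aranyani oblique forest, whose architecture this table does not give.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])

@tf.function
def online_step(x, y):
    """One online update on a newly arrived (x, y) batch."""
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = huber(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

In an online run, `online_step` would be invoked once per arriving example (batch size 1), matching the incremental evaluation setting described in the Dataset Splits row.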