Self-Supervised Fair Representation Learning without Demographics

Authors: Junyi Chai, Xiaoqian Wang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our proposed method achieves performance better than or comparable to state-of-the-art methods on three datasets in terms of accuracy and several fairness metrics. (Section 4, Experiments)
Researcher Affiliation | Academia | Junyi Chai, Xiaoqian Wang; Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47906; {chai28,joywang}@purdue.edu
Pseudocode | Yes | Algorithm 1: Optimization Algorithm
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
Open Datasets | Yes | CelebA (Liu et al., 2015): The dataset contains... Adult (Dua and Graff, 2017): The dataset contains... COMPAS (Larson et al., 2016): The dataset contains...
Dataset Splits | Yes | We repeat experiments on each dataset three times and report the average results; in each repetition we randomly split the data into 64% training data, 16% validation data, and 20% test data. (See the split sketch after the table.)
Hardware Specification | Yes | We implement our method in PyTorch 1.10.1 with one NVIDIA RTX-3090 GPU. (See the environment check after the table.)
Software Dependencies | Yes | We implement our method in PyTorch 1.10.1 with one NVIDIA RTX-3090 GPU.
Experiment Setup | Yes | All hyperparameters are tuned to find the best validation accuracy. The hyperparameter values in our method are set by performing cross-validation on the training data over the range 0.1 to 10. The hyperparameters for the compared methods are tuned as suggested in the original papers (Hardt et al., 2016; Hashimoto et al., 2018; Hwang et al., 2020). (See the tuning sketch after the table.)
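For reference, the 64%/16%/20% protocol in the Dataset Splits row can be reproduced with a two-stage random split. The sketch below is a minimal illustration, not the authors' code; the feature matrix X, label vector y, helper name split_64_16_20, and the three seeds are assumptions introduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_64_16_20(X, y, seed):
    """Randomly split data into 64% train, 16% validation, 20% test."""
    # Hold out 20% of the data as the test set.
    X_trainval, X_test, y_trainval, y_test = train_test_split(
        X, y, test_size=0.20, random_state=seed)
    # Take 20% of the remaining 80% as validation (0.80 * 0.20 = 16% overall),
    # which leaves 64% of the original data for training.
    X_train, X_val, y_train, y_val = train_test_split(
        X_trainval, y_trainval, test_size=0.20, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)

# The table reports averages over three repetitions, e.g. three random seeds.
splits = [split_64_16_20(np.random.rand(100, 5),
                         np.random.randint(0, 2, size=100), seed)
          for seed in (0, 1, 2)]
```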
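The Hardware Specification and Software Dependencies rows pin the environment to PyTorch 1.10.1 and one NVIDIA RTX-3090 GPU. A minimal check along these lines can confirm that a reproduction environment matches; it uses only standard torch calls and makes no claim about the authors' own setup scripts.

```python
import torch

# Report the installed PyTorch version (the table states 1.10.1).
print("torch:", torch.__version__)

# Confirm a CUDA GPU is visible (the table states one NVIDIA RTX-3090).
if torch.cuda.is_available():
    print("gpu:", torch.cuda.get_device_name(0))
else:
    print("no CUDA device found")
```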
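The Experiment Setup row tunes hyperparameters over the range 0.1 to 10 and selects for the best validation accuracy. The sketch below is one plausible reading of that protocol as a log-spaced grid search over a single trade-off weight; the names lam and train_and_validate are hypothetical, and the actual method may tune several hyperparameters jointly via cross-validation.

```python
import numpy as np

def grid_search(train_and_validate):
    """Search one hyperparameter over [0.1, 10], keeping the best validation accuracy.

    `train_and_validate(lam)` is a hypothetical callback that trains the model
    with hyperparameter value `lam` and returns its validation accuracy.
    """
    candidates = np.logspace(-1, 1, num=9)  # 0.1, ..., 1.0, ..., 10.0
    best_lam, best_acc = None, -np.inf
    for lam in candidates:
        acc = train_and_validate(lam)
        if acc > best_acc:
            best_lam, best_acc = lam, acc
    return best_lam, best_acc
```

A log-spaced grid is a common choice when the search range spans two orders of magnitude; the table does not state which grid the authors actually used.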