Unbiased Classification through Bias-Contrastive and Bias-Balanced Learning

Authors: Youngkyu Hong, Eunho Yang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments show that our proposed methods significantly improve previous debiasing methods in various realistic datasets." and, from Section 4 (Experiments): "We conduct experiments to evaluate how well our proposed method performs debiasing."
Researcher Affiliation | Collaboration | Youngkyu Hong (Naver AI Lab, youngkyu.hong@navercorp.com); Eunho Yang (KAIST, AITRICS, eunhoy@kaist.ac.kr). The paper notes: "This work was done as a student at KAIST."
Pseudocode | No | The paper presents its method through mathematical formulations but includes no explicitly labeled pseudocode or algorithm blocks. (A hedged sketch of the objective appears after this table.)
Open Source Code | Yes | "Source code for our experiments is publicly available." (Footnote 1: https://github.com/grayhong/bias-contrastive-learning)
Open Datasets | Yes | "For the case where the bias label is available, we evaluate the methods on CelebA [31] and UTKFace [46], which have biases toward sensitive attributes such as gender or race. For the case where the bias label is unavailable, we use ImageNet [36] and ImageNet-A [23] to assess whether the bias of our model has been removed." The paper also runs a "controlled experiment on Biased MNIST [3], where each digit is highly correlated with certain background color": "We use Biased MNIST [3] dataset. As explained in Section 3.4, Biased MNIST is an MNIST [29] dataset ..." (See the Biased MNIST sketch after this table.)
Dataset Splits | Yes | "We explicitly construct a validation set and report the test unbiased accuracy at the epoch with the highest validation unbiased accuracy." and, from Appendix C.3: "We split the original dataset into 80% train, 10% validation, and 10% test set by class-balanced sampling. We follow the same procedure for UTKFace dataset." (A split sketch follows the table.)
Hardware Specification | No | The paper does not specify the hardware (GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper names models such as ResNet18 but provides no version numbers for the software libraries used in the experiments.
Experiment Setup | Yes | "We pre-train the bias-capturing model for 80 epochs." and "full training of 120 epochs."; "τ is a temperature hyperparameter." and "α is a weight hyperparameter."; the paper refers to Appendices C.3 and C.5 for the remaining details (learning rate, batch size, optimizer, number of epochs). (A two-stage training skeleton follows the table.)
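
Since the paper provides no pseudocode (Pseudocode row above), here is a minimal PyTorch sketch of one reading of the bias-contrastive objective: a supervised-contrastive-style loss whose positive pairs share the class label but differ in the bias label, with the quoted temperature τ, combined with a classification loss under the quoted weight α. The function names, the positive-pair definition, and all default values are assumptions, not the authors' reference implementation (which is at the linked GitHub repository).

```python
import torch
import torch.nn.functional as F

def bias_contrastive_loss(features, targets, bias_labels, tau=0.07):
    # SupCon-style sketch: positives are pairs with the same class label
    # but a DIFFERENT bias label. tau is the temperature hyperparameter
    # quoted from the paper; the pair definition and the default 0.07
    # are assumptions.
    feats = F.normalize(features, dim=1)                 # (N, D) unit vectors
    sim = feats @ feats.t() / tau                        # pairwise similarities
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, -1e9)               # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    same_class = targets.unsqueeze(0) == targets.unsqueeze(1)
    diff_bias = bias_labels.unsqueeze(0) != bias_labels.unsqueeze(1)
    pos_mask = same_class & diff_bias & ~self_mask
    pos_count = pos_mask.sum(dim=1)
    has_pos = pos_count > 0                              # anchors with a positive
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_count.clamp(min=1)
    return loss[has_pos].mean()

def debiasing_objective(logits, features, targets, bias_labels,
                        alpha=1.0, tau=0.07):
    # alpha is the weight hyperparameter quoted from the paper. Pairing it
    # with plain cross-entropy is a simplification: the paper also proposes
    # a bias-balanced variant of the classification loss, not sketched here.
    ce = F.cross_entropy(logits, targets)
    con = bias_contrastive_loss(features, targets, bias_labels, tau)
    return ce + alpha * con
```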
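The Biased MNIST quote in the Open Datasets row is truncated by extraction; the construction it refers to paints each digit's background with a color that co-occurs with the digit class at a high correlation ratio. A minimal NumPy sketch, assuming an illustrative ten-color palette and a correlation ratio `rho`; the paper's exact palette and ratios come from its Biased MNIST protocol [3].

```python
import numpy as np

# One background RGB color per digit class. Palette values are illustrative.
PALETTE = np.array([
    [255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0], [255, 0, 255],
    [0, 255, 255], [255, 128, 0], [128, 0, 255], [0, 128, 128], [128, 128, 0]])

def colorize(image, label, rho=0.99, rng=np.random):
    # Paint the background of a grayscale MNIST digit (H, W, uint8).
    # With probability rho the background color matches the digit's
    # assigned color (the bias); otherwise a random other color is used.
    if rng.random() < rho:
        bias = label
    else:
        bias = rng.choice([c for c in range(10) if c != label])
    rgb = np.stack([image] * 3, axis=-1).astype(np.float32) / 255.0
    background = (1.0 - rgb) * PALETTE[bias]   # fill the dark pixels
    return (rgb * 255 + background).astype(np.uint8), bias
```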
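The 80%/10%/10% class-balanced split quoted from Appendix C.3 (Dataset Splits row) can be approximated with a stratified split. This sketch uses scikit-learn; the paper does not state which tooling was used, and the seed is arbitrary.

```python
from sklearn.model_selection import train_test_split

def class_balanced_split(indices, labels, seed=0):
    # 80/10/10 split stratified by class label, mirroring the paper's
    # "class-balanced sampling" description (exact procedure assumed).
    train_idx, rest_idx, _, rest_y = train_test_split(
        indices, labels, test_size=0.2, stratify=labels, random_state=seed)
    val_idx, test_idx = train_test_split(
        rest_idx, test_size=0.5, stratify=rest_y, random_state=seed)
    return train_idx, val_idx, test_idx
```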
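For the Experiment Setup row, the quoted schedule is 80 epochs of bias-capturing pre-training followed by 120 epochs of full training. A skeleton of that two-stage loop, with a placeholder optimizer and learning rate since the actual values are in Appendices C.3 and C.5 of the paper:

```python
import torch

def run_stage(model, loader, loss_fn, epochs, lr=1e-3):
    # Generic training stage. The Adam optimizer and learning rate are
    # placeholders, not the paper's reported configuration.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y, b in loader:       # image, class label, bias label
            opt.zero_grad()
            loss_fn(model, x, y, b).backward()
            opt.step()
    return model

# Stage 1: pre-train the bias-capturing model for 80 epochs.
# Stage 2: full training of the debiased model for 120 epochs,
# e.g. with the debiasing_objective sketched above.
```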