Counterfactual Fairness with Disentangled Causal Effect Variational Autoencoder

Authors: Hyemi Kim, Seungjae Shin, JoonHo Jang, Kyungwoo Song, Weonyoung Joo, Wanmo Kang, Il-Chul Moon | pp. 8128-8136

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments. Table 1: The total effect and counterfactual effect of real and generated datasets (O = {race, native country}); CE error is $\sum_{i,j \in \{0,1\}} |o_{ij} - \hat{o}_{ij}|/4$ with true CE $o_{ij}$; the numbers in bold indicate the best performance, and the underlined numbers indicate the second-best performance. Table 2 shows the result of counterfactual generation on Mustache. (A minimal sketch of the CE error computation appears after this table.)
Researcher Affiliation | Academia | Hyemi Kim¹, Seungjae Shin¹, Joon Ho Jang¹, Kyungwoo Song², Weonyoung Joo¹, Wanmo Kang³, and Il-Chul Moon¹. ¹Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology (KAIST); ²Department of AI, University of Seoul; ³Department of Mathematical Sciences, KAIST.
Pseudocode | Yes | Algorithm 1 in Appendix 3.4 specifies the sampling from $q(a, u_r, u_d)$ and $q(a, u_r)q(u_d)$ under the conditions, and we apply the permutation to minimize $L_{TC}$. (A hedged sketch of the batch-permutation step appears after this table.)
Open Source Code | No | The paper does not provide any specific links or explicit statements about the release of its source code.
Open Datasets | Yes | We use the UCI Adult income dataset (Dua and Graff 2017) for causal estimation and fair classification tasks. We use the CelebA dataset (Liu et al. 2018) for the counterfactual image generation.
Dataset Splits | No | The paper mentions training and testing sets, but does not provide specific information about validation splits or their proportions.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions various models and libraries (e.g., logistic regression, SVM, ResNet-18, ArcFace) but does not specify their version numbers or other software dependencies.
Experiment Setup | No | The paper describes general experimental settings like dataset usage and evaluation metrics, but does not provide specific hyperparameter values or detailed training configurations (e.g., learning rate, batch size, number of epochs) in the main text.
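
The CE error quoted in the "Research Type" row is just a mean absolute deviation over the four (i, j) cells of the counterfactual-effect table. Below is a minimal Python sketch of that computation; the numeric values are placeholders for illustration, not figures from the paper.

```python
import numpy as np

# Hypothetical 2x2 arrays of counterfactual effects indexed by (i, j) in {0, 1}^2.
# o_true[i, j] is the ground-truth CE o_ij; o_est[i, j] is the model estimate.
o_true = np.array([[0.12, 0.08],
                   [0.05, 0.10]])
o_est = np.array([[0.10, 0.09],
                  [0.07, 0.11]])

# CE error = sum over i, j in {0, 1} of |o_ij - o_hat_ij| / 4,
# i.e. the mean absolute deviation over the four cells.
ce_error = np.abs(o_true - o_est).sum() / 4
print(f"CE error: {ce_error:.4f}")
```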
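
The "Pseudocode" row refers to permuting samples to contrast $q(a, u_r, u_d)$ with the product $q(a, u_r)q(u_d)$ when minimizing the total-correlation term $L_{TC}$. Since no source code is released, the following is only a hedged sketch of how such a batch permutation is commonly implemented (a FactorVAE-style density-ratio setup); the tensor names and dimensions are illustrative assumptions, not the authors' code.

```python
import torch

def permute_batch(u_d: torch.Tensor) -> torch.Tensor:
    """Shuffle u_d along the batch dimension so that (a, u_r) paired with the
    permuted u_d approximates a draw from the product q(a, u_r) q(u_d)."""
    idx = torch.randperm(u_d.size(0), device=u_d.device)
    return u_d[idx]

# Hypothetical encoder outputs for one minibatch: a is the sensitive attribute,
# u_r and u_d are the disentangled latent factors.
batch_size, dim_r, dim_d = 128, 10, 10
a = torch.randint(0, 2, (batch_size, 1)).float()
u_r = torch.randn(batch_size, dim_r)
u_d = torch.randn(batch_size, dim_d)

joint = torch.cat([a, u_r, u_d], dim=1)                   # sample from q(a, u_r, u_d)
product = torch.cat([a, u_r, permute_batch(u_d)], dim=1)  # sample from q(a, u_r) q(u_d)

# A discriminator trained to distinguish `joint` from `product` would supply the
# density-ratio estimate used for the L_TC term; that component is omitted here.
```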