Towards Counterfactual Fairness-aware Domain Generalization in Changing Environments

Authors: Yujie Lin, Chen Zhao, Minglai Shao, Baoluo Meng, Xujiang Zhao, Haifeng Chen

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical validation on synthetic and authentic datasets substantiates the efficacy of our approach, demonstrating elevated accuracy levels while ensuring the preservation of fairness amidst the evolving landscape of continuous domains." and Section 6 (Experiments).
Researcher Affiliation | Collaboration | 1. School of New Media and Communication, Tianjin University, China; 2. Department of Computer Science, Baylor University, USA; 3. GE Aerospace Research, USA; 4. NEC Labs America, USA
Pseudocode | Yes | "Algorithm 1: Optimization procedure for DCFDG"
Open Source Code | No | The paper does not provide an explicit statement about the availability of its source code, nor does it include any links to a code repository.
Open Datasets | Yes | "Adult [Kohavi and others, 1996] contains a diverse set of attributes pertaining to individuals in the United States." and "Chicago Crime [Zhao and Chen, 2020] dataset includes a comprehensive compilation of criminal incidents in different communities across Chicago city in 2015."
Dataset Splits | Yes | "We partitioned the domains into source, intermediary, and target domains by the ratio (1/3). The source domains are employed for training the DCFDG, while the intermediary domains serve as the validation set. All evaluations are conducted within the target domains."
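The equal-thirds split described above can be sketched as follows. This is a minimal illustration, not the paper's code: the function name `split_domains` and the use of integer domain IDs are assumptions, and the paper's actual domain ordering (e.g., by time) is not specified here.

```python
def split_domains(domains):
    """Split an ordered list of domains into equal source,
    intermediary, and target thirds (ratio 1/3 each)."""
    n = len(domains)
    a, b = n // 3, 2 * n // 3
    # source: training; intermediary: validation; target: evaluation
    return domains[:a], domains[a:b], domains[b:]

source, intermediary, target = split_domains(list(range(9)))
```

With nine ordered domains this yields three domains per split, matching the 1/3 ratio reported in the paper.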
Hardware Specification | No | The paper describes the experimental setup and training process, but it does not specify any particular hardware components such as GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper mentions neural network components (e.g., LSTM, VAE, fully connected layers, ReLU) but does not specify any software dependencies with version numbers (e.g., Python version, or deep learning framework versions such as PyTorch or TensorFlow).
Experiment Setup | Yes | "We partitioned the domains into source, intermediary, and target domains by the ratio (1/3)." and "We varied the parameter λf across five values ([0.02, 0.1, 0.2, 0.5, 1]) to obtain the results of each baseline under these five settings." and "For all the encoders, decoders, classifiers, and discriminators, we employed the most common fully connected layers and ReLU activation functions. The specific architecture details can be found in Appendix B.3."
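The fully-connected-plus-ReLU building block used for all encoders, decoders, classifiers, and discriminators can be sketched in a few lines. This is an illustrative NumPy forward pass only: the layer widths, the function names `relu` and `mlp_forward`, and the choice of no activation after the final layer are assumptions; the actual architectures are given in the paper's Appendix B.3.

```python
import numpy as np

def relu(x):
    # ReLU activation: element-wise max(x, 0)
    return np.maximum(x, 0.0)

def mlp_forward(x, weights, biases):
    """Forward pass through a stack of fully connected layers with
    ReLU activations between layers (none after the last layer)."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:
            x = relu(x)
    return x

# Illustrative 4 -> 3 -> 2 network applied to a batch of 5 inputs.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((3, 2))]
biases = [np.zeros(3), np.zeros(2)]
out = mlp_forward(np.ones((5, 4)), weights, biases)
```

The same block could then be swept over the five λf settings listed above by retraining once per value.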