Group Fairness by Probabilistic Modeling with Latent Fair Decisions

Authors: YooJung Choi, Meihua Dang, Guy Van den Broeck (pp. 12051-12059)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show on real-world datasets that our approach not only is a better model of how the data was generated than existing methods but also achieves competitive accuracy. Moreover, we also evaluate our approach on a synthetic dataset in which observed labels indeed come from fair labels but with added bias, and demonstrate that the fair labels are successfully retrieved." (A bias-injection sketch follows the table.)
Researcher Affiliation | Academia | "YooJung Choi, Meihua Dang, Guy Van den Broeck. Computer Science Department, University of California, Los Angeles. {yjchoi,mhdang,guyvdb}@cs.ucla.edu"
Pseudocode | No | The paper describes its algorithms in prose but does not include structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not provide an explicit link to source code for its methodology; the link in Appendix A points to the paper itself, not to external code.
Open Datasets | Yes | "We use three datasets: COMPAS, Adult, and German (Dua and Graff 2017), which are commonly studied benchmarks for fair ML." Cited resource: Dua, D.; and Graff, C. 2017. UCI Machine Learning Repository. URL http://archive.ics.uci.edu/ml. (A loading sketch follows the table.)
Dataset Splits | Yes | "We generated different synthetic datasets with the number of non-sensitive features ranging from 10 to 30, using 10-fold CV for each." (A cross-validation sketch follows the table.)
Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper does not specify software dependencies or their version numbers.
Experiment Setup | No | The paper describes data generation and pre-processing steps but does not provide training details such as hyperparameter values (e.g., learning rate, batch size, number of epochs).
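
The Research Type row quotes a synthetic experiment in which observed labels come from fair labels with added bias. Below is a minimal sketch of one way such data could be generated, assuming a simple group-dependent label-flipping bias model; the flip rates, feature dimensions, and variable names are our own illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute S and non-sensitive features X (dimensions are arbitrary here).
s = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 10))

# Latent "fair" decision D_f: generated independently of S,
# so it satisfies statistical parity by construction.
d_fair = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Observed label D: the fair label with group-dependent bias injected,
# e.g. positives in one group (s == 0) are flipped to negative more often.
flip_rate = np.where(s == 0, 0.3, 0.05)  # hypothetical bias strengths
flip = rng.random(n) < flip_rate
d_observed = np.where(flip & (d_fair == 1), 0, d_fair)

# The biased labels now violate statistical parity even though d_fair does not.
for name, d in [("fair", d_fair), ("observed", d_observed)]:
    gap = abs(d[s == 1].mean() - d[s == 0].mean())
    print(f"{name:8s} |P(D=1|S=1) - P(D=1|S=0)| = {gap:.3f}")
```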
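For the Open Datasets row: the three benchmarks are hosted in the UCI Machine Learning Repository cited above. As an illustration (the paper itself gives no loading code), the Adult dataset can be read straight from the UCI archive; the column names below follow the standard UCI documentation, and the raw file has no header row.

```python
import pandas as pd

# Standard UCI Adult column names.
COLS = ["age", "workclass", "fnlwgt", "education", "education-num",
        "marital-status", "occupation", "relationship", "race", "sex",
        "capital-gain", "capital-loss", "hours-per-week",
        "native-country", "income"]

URL = "http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
adult = pd.read_csv(URL, names=COLS, skipinitialspace=True, na_values="?")

print(adult.shape)  # expected: (32561, 15)
print(adult["sex"].value_counts())
```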
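For the Dataset Splits row: a generic scikit-learn sketch of the 10-fold cross-validation protocol mentioned in the quote. The classifier and data here are placeholders, since the paper's actual model is a probabilistic circuit with a latent fair decision variable.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))   # placeholder features
y = (X[:, 0] > 0).astype(int)     # placeholder labels

# 10-fold CV, as used for each synthetic dataset in the paper.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(f"10-fold accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```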