Fairness-Aware Estimation of Graphical Models

Authors: Zhuoping Zhou, Davoud Ataee Tarzanagh, Bojian Hou, Qi Long, Li Shen

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental evaluations on synthetic and real-world datasets demonstrate that our framework effectively mitigates bias without undermining GMs performance.
Researcher Affiliation | Academia | University of Pennsylvania; {zhuopinz@sas., tarzanagh@}upenn.edu; {bojian.hou, qlong, li.shen}@pennmedicine.upenn.edu
Pseudocode | Yes | Algorithm 1 Fair Estimation of GMs (Fair GMs)
Open Source Code | Yes | Code is available at https://github.com/PennShenLab/Fair_GMs
Open Datasets | Yes | Experimental evaluations on synthetic and real-world datasets demonstrate that our framework effectively mitigates bias without undermining GMs performance.
Dataset Splits | No | The paper does not explicitly mention validation splits or specific training/validation/test percentages; it discusses training on the entire dataset or on group-specific data and evaluating fairness metrics.
Hardware Specification | Yes | These experiments are conducted on an Apple M2 Pro processor.
Software Dependencies | No | The paper mentions software components such as scipy.optimize.minimize and various algorithms (e.g., QUIC, PISTA) but does not specify their version numbers, which are required for a reproducible description of software dependencies.
Experiment Setup | Yes | The initial iterate Θ(0) is chosen based on the highest graph disparity error among local graphs; this initialization can improve fairness by minimizing larger disparity errors. The ℓ1-norm coefficient λ is fixed for each dataset, searched over a grid in {1e-5, ..., 0.01, ..., 0.1, 1}. Tolerance ϵ is set to 1e-5, with a maximum of 1e+7 iterations. The initial value of ℓ is 1e-2, undergoing a line search at each iteration t with a decay rate of 0.1.
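For readers who want to sanity-check the reported schedule, the sketch below mirrors it in a generic proximal-gradient loop with backtracking. This is a minimal hypothetical illustration, not the paper's Algorithm 1: the names `f`, `grad_f`, `soft_threshold`, and `prox_gradient` are assumptions, and the ℓ1 soft-thresholding step stands in for the unspecified fair GM objective. Only the numeric settings (the λ values named in the grid, tolerance 1e-5, 1e+7 iteration cap, initial step 1e-2, decay rate 0.1) come from the row above.

```python
import numpy as np

# Hypothetical sketch mirroring the reported schedule; NOT the paper's Algorithm 1.
LAMBDA_GRID = [1e-5, 1e-2, 1e-1, 1.0]  # values named in the report; intermediate grid points are elided there
TOL = 1e-5            # convergence tolerance (epsilon)
MAX_ITERS = int(1e7)  # maximum number of iterations
STEP_INIT = 1e-2      # initial value of the line-search parameter (the report's "l")
DECAY = 0.1           # line-search decay rate

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_gradient(f, grad_f, theta0, lam):
    """Generic proximal-gradient loop with backtracking line search.

    `f` and `grad_f` stand in for the smooth part of the (unspecified here)
    fair GM objective; only the numeric schedule follows the report.
    """
    theta = theta0.copy()
    for _ in range(MAX_ITERS):
        g = grad_f(theta)
        step = STEP_INIT
        # Backtracking: shrink the step by DECAY until the standard
        # quadratic upper-bound (sufficient-decrease) condition holds.
        while True:
            candidate = soft_threshold(theta - step * g, step * lam)
            diff = candidate - theta
            if f(candidate) <= f(theta) + np.sum(g * diff) + np.sum(diff**2) / (2 * step):
                break
            step *= DECAY
        if np.linalg.norm(candidate - theta) <= TOL:
            return candidate
        theta = candidate
    return theta
```

One reading of the reported "decay rate of 0.1" is an aggressive backtracking factor, as used here; the paper's actual line-search rule and stopping criterion may differ.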