Adapting Fairness Interventions to Missing Values

Authors: Raymond Feng, Flavio Calmon, Hao Wang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments with state-of-the-art fairness interventions demonstrate that our adaptive algorithms consistently achieve higher fairness and accuracy than impute-then-classify across different datasets. (A hedged sketch of the impute-then-classify baseline appears after this table.)
Researcher Affiliation | Collaboration | John A. Paulson School of Engineering and Applied Sciences, Harvard University; MIT-IBM Watson AI Lab
Pseudocode | Yes | Algorithm 1: Recursive partitioning of missing patterns.
    Input: D = {(x_i, s_i, y_i)}_{i=1}^n
    Initialize: partition P = ∅, P̃ = M
    while P̃ ≠ ∅ do
        M_q ← P̃[0]; P̃ ← P̃ \ {M_q}
        if G(M_q) = ∅ then
            P ← P ∪ {M_q}; continue
        end if
        j* ← arg min_{j ∈ G(M_q)} L(M_q, j)
        if L(M_q, j*) < min_{h ∈ H} Σ_{i ∈ I_q} ℓ(y_i, h(x_i)) then
            P̃ ← P̃ ∪ {M_q^{j*0}, M_q^{j*1}}
        else
            P ← P ∪ {M_q}
        end if
    end while
    Return: P = {M_q}_{q=1}^Q
(A Python sketch of this loop appears after this table.)
Open Source Code | No | The paper links to third-party implementations of the fairness intervention algorithms used for benchmarking (Disparate Mistreatment, Fair Projection, Leveraging, and the AIF360 library), but it does not provide the source code for the authors' own adaptive algorithms described in the paper.
Open Datasets | Yes | We test our adaptive algorithms on COMPAS [34], Adult [12], the IPUMS Adult reconstruction [9, 15], and the High School Longitudinal Study (HSLS) dataset [22]. (A hedged sketch of loading two of these datasets appears after this table.)
Dataset Splits | No | The paper mentions 'different train-test splits' and that, for the missing pattern clustering algorithm, 'we reserve part of the training set as validation', but it does not give specific percentages, sample counts, or other details of the train/validation/test splits needed for reproduction. (A generic split sketch with placeholder fractions appears after this table.)
Hardware Specification | No | All experiments were run on a personal computer with 4 CPU cores and 16GB memory.
Software Dependencies | No | The paper mentions use of the 'AIF360 library' and, implicitly, Python, but does not provide specific version numbers for these or any other key software components.
Experiment Setup | Yes | For algorithms with tunable hyperparameters used in the experiments, we report the values of the hyperparameters that were tested in Table 3.
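
The Research Type row above contrasts the paper's adaptive algorithms with an impute-then-classify baseline. The sketch below only illustrates what such a baseline typically looks like; scikit-learn and the toy data are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch of an impute-then-classify baseline: fill in missing feature
# values first, then fit an ordinary classifier on the completed data.
# scikit-learn and the toy arrays are illustrative assumptions, not details
# from the paper.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X = np.array([[1.0, np.nan], [0.0, 3.0], [np.nan, 2.0], [1.0, 1.0]])  # features with missing values
y = np.array([1, 0, 0, 1])                                            # binary labels

baseline = make_pipeline(
    SimpleImputer(strategy="mean"),   # step 1: impute missing entries
    LogisticRegression(),             # step 2: classify on the imputed data
)
baseline.fit(X, y)
print(baseline.predict(X))
```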
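For the Pseudocode row, the following is a minimal Python rendering of the control flow of Algorithm 1. The helpers splittable_attributes (G), split_loss (L), fit_classifier, pattern_loss, and split_pattern are hypothetical stand-ins for quantities defined in the paper; only the loop structure mirrors the pseudocode above.

```python
# Sketch of the recursive-partitioning loop in Algorithm 1. The helper
# functions used below (splittable_attributes, split_loss, fit_classifier,
# pattern_loss, split_pattern) are hypothetical placeholders for G, L, the
# per-pattern empirical loss, and the pattern split defined in the paper.
from collections import deque

def partition_missing_patterns(root_patterns, data):
    finalized = []                   # P: finalized partition of missing patterns
    pending = deque(root_patterns)   # P~: patterns still to be examined

    while pending:
        pattern = pending.popleft()
        candidates = splittable_attributes(pattern)          # G(M_q)
        if not candidates:
            finalized.append(pattern)                        # no attribute left to split on
            continue

        # Attribute whose observed/missing split yields the lowest loss L(M_q, j).
        best_attr = min(candidates, key=lambda j: split_loss(pattern, j, data))

        # Loss of keeping the pattern whole: best single classifier on its rows.
        whole_loss = pattern_loss(fit_classifier(pattern, data), pattern, data)

        if split_loss(pattern, best_attr, data) < whole_loss:
            observed_part, missing_part = split_pattern(pattern, best_attr)  # M_q^{j*0}, M_q^{j*1}
            pending.extend([observed_part, missing_part])
        else:
            finalized.append(pattern)

    return finalized
```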
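For the Open Datasets row, two of the named datasets (Adult and COMPAS) have loaders in the AIF360 library mentioned elsewhere in this report. The sketch assumes AIF360 is installed and that its required raw data files have already been downloaded into its data directories; the IPUMS Adult reconstruction and HSLS datasets are distributed separately and are not shown.

```python
# Hedged sketch: loading the Adult and COMPAS datasets through AIF360.
# AIF360 raises an error with download instructions if the raw CSV files have
# not been placed in its data directories beforehand.
from aif360.datasets import AdultDataset, CompasDataset

adult = AdultDataset()    # UCI Adult income data
compas = CompasDataset()  # ProPublica COMPAS recidivism data

print(adult.features.shape, compas.features.shape)
```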
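For the Dataset Splits row: since the paper does not state split sizes, the fractions below are placeholders rather than values from the paper. The sketch only illustrates the generic pattern described in the report, holding out a test set and then reserving part of the remaining training data as a validation set.

```python
# Placeholder train/validation/test split; the 70/30 and 75/25 fractions are
# illustrative assumptions, not values reported in the paper.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 5)              # toy features
y = np.random.randint(0, 2, size=100)   # toy binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)
print(len(X_fit), len(X_val), len(X_test))
```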