Sampling Ex-Post Group-Fair Rankings

Authors: Sruthi Gorantla, Amit Deshpande, Anand Louis

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We give empirical evidence that our algorithms compare favorably against recent baselines for fairness and ranking utility on real-world data sets.
Researcher Affiliation | Collaboration | (1) Indian Institute of Science, Bengaluru; (2) Microsoft Research, Bengaluru
Pseudocode | Yes | Algorithm 1: Sampling a uniform random group-fair representation; Algorithm 2: Sampling an approximately uniform random group-fair representation
Open Source Code | Yes | Implementation of our algorithms and the baselines has been made available for reproducibility at github.com/sruthigorantla/Sampling Ex Post Group Fair Rankings.
Open Datasets | Yes | We evaluate our results on the German Credit Risk dataset comprising credit risk scoring of 1000 adult German residents [Dua and Graff, 2017]... We also evaluate our algorithm on the IIT-JEE 2009 dataset, also used in Celis et al. (2020b).
Dataset Splits | No | The paper does not specify training, validation, or test splits; its algorithms are sampling-based and operate on existing in-group rankings rather than trained models.
Hardware Specification | Yes | The experiments were run on a quad-core Intel Core i5 processor with a clock speed of 2.3 GHz and 8 GB of DRAM.
Software Dependencies | No | The paper mentions using 'Polytope Sampler' and 'Matlab' (in a footnote) but does not provide specific version numbers for these software components used in the experiments.
Experiment Setup | Yes | We use k = 100 and b = 50 in the experiments. We sample 1000 rankings for randomized algorithms and output the mean and standard deviation. (A sketch of this setup follows the table.)
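The Pseudocode and Experiment Setup rows describe a block-based sampling procedure and its reported parameters (k = 100, b = 50, 1000 sampled rankings, with mean and standard deviation), but this summary does not reproduce Algorithm 1 or Algorithm 2. The sketch below is only a minimal illustration of such an experiment harness, assuming a simplified sampler in which each block of b positions is filled according to a group composition drawn uniformly from a feasible set defined by hypothetical per-block lower/upper bounds. The function names, the bounds, the synthetic relevance scores, and the DCG utility are illustrative assumptions, not the authors' implementation (which is linked in the Open Source Code row).

```python
import math
import random
import statistics


def enumerate_block_compositions(block_size, lower, upper):
    """Enumerate all ways to split block_size slots among the groups so that
    group j receives between lower[j] and upper[j] slots. Practical only for
    a small number of groups, as in the two-group datasets above."""
    def rec(j, remaining):
        if j == len(lower) - 1:
            if lower[j] <= remaining <= upper[j]:
                yield (remaining,)
            return
        for c in range(lower[j], min(upper[j], remaining) + 1):
            for rest in rec(j + 1, remaining - c):
                yield (c,) + rest
    return list(rec(0, block_size))


def sample_group_fair_ranking(in_group_rankings, k, b, lower, upper, rng):
    """Sample one length-k ranking block by block: draw a group composition
    for each block uniformly from the feasible set, then fill the block with
    the highest-ranked unused items from each group's in-group ranking.
    The within-block ordering here (group by group) is purely illustrative."""
    g = len(in_group_rankings)
    next_item = [0] * g  # index of the next unused item per group
    compositions = enumerate_block_compositions(b, lower, upper)
    ranking = []
    for _ in range(k // b):
        counts = rng.choice(compositions)
        for j in range(g):
            take = in_group_rankings[j][next_item[j]:next_item[j] + counts[j]]
            ranking.extend(take)
            next_item[j] += counts[j]
    return ranking


if __name__ == "__main__":
    # Hypothetical two-group setup mirroring the quoted parameters:
    # k = 100, b = 50, 1000 sampled rankings, mean and standard deviation.
    rng = random.Random(0)
    groups = [[f"g{j}_item{i}" for i in range(200)] for j in range(2)]
    relevance = {item: 1.0 / (i + 1) for grp in groups for i, item in enumerate(grp)}
    lower, upper = [20, 20], [30, 30]  # per-block bounds, chosen arbitrarily here

    dcgs = []
    for _ in range(1000):
        ranking = sample_group_fair_ranking(groups, k=100, b=50,
                                            lower=lower, upper=upper, rng=rng)
        dcgs.append(sum(relevance[item] / math.log2(pos + 2)
                        for pos, item in enumerate(ranking)))

    print(f"DCG mean = {statistics.mean(dcgs):.3f}, std = {statistics.stdev(dcgs):.3f}")
```

Reporting the mean and standard deviation over 1000 sampled rankings, as in the quoted setup, captures both the expected utility and the variability introduced by randomized sampling; the fairness bounds themselves would be set per dataset rather than with the placeholder values used here.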