Fairness and Bias in Online Selection

Authors: Jose Correa, Andres Cristi, Paul Duetting, Ashkan Norouzi-Fard

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we empirically validate our results in synthetic and real-world experiments. We present experiments for the multi-color secretary problem in Section 4.1 and the multi-color prophet problem in Section 4.2.
Researcher Affiliation | Collaboration | (1) Department of Industrial Engineering, Universidad de Chile, Santiago, Chile. (2) Google Research, Zürich, Switzerland.
Pseudocode | Yes | Algorithm 1 GROUPTHRESHOLDS(t), Algorithm 2 FAIR GENERAL PROPHET, Algorithm 3 FAIR IID PROPHET
Open Source Code | Yes | An implementation of these experiments is available at https://github.com/google-research/google-research/tree/master/fairness_and_bias_in_online_selection.
Open Datasets | Yes | We consider a dataset containing one record for each phone call by a Portuguese banking institution (Moro et al., 2014). We consider a dataset containing the influence of the users of the Pokec social network (Takac & Zabovsky, 2012).
Dataset Splits | No | The paper describes experiments on these datasets but does not state train/validation/test splits with percentages or sample counts, nor reference predefined splits for reproduction.
Hardware Specification | No | The paper does not mention any specific hardware (GPU, CPU, cloud instance type) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers.
Experiment Setup | No | The paper reports the number of runs and the data distributions for the synthetic datasets, but does not provide hyperparameters, training configurations, or system-level settings needed for reproducibility.
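The group-thresholds idea named in the Pseudocode row can be illustrated with a short sketch. This is an illustrative simplification under assumed semantics, not the paper's exact Algorithm 1 (GROUPTHRESHOLDS): each group g observes arrivals until a per-group threshold t_g, after which the first candidate from g that beats the best value seen so far within g is accepted. The function name, threshold encoding, and tie-breaking below are all hypothetical choices made for this sketch.

```python
def group_threshold_select(values, groups, thresholds):
    """Toy group-thresholds secretary rule (illustrative only, not
    the paper's exact GROUPTHRESHOLDS procedure).

    Candidates arrive online in the order given; `thresholds[g]` is
    the number of arrivals after which group g may start accepting.
    Returns the index of the accepted candidate, or None if no
    candidate is ever accepted.
    """
    best = {}  # best value observed so far, per group
    for step, (v, g) in enumerate(zip(values, groups)):
        # After group g's observation phase, accept the first
        # candidate that beats the group's running maximum.
        if step >= thresholds[g] and v > best.get(g, float("-inf")):
            return step
        best[g] = max(best.get(g, float("-inf")), v)
    return None


# Example: group 1 may accept from step 3 onward; the value 9 at
# step 3 beats group 1's running maximum (4) and is selected.
print(group_threshold_select([3, 5, 4, 9, 2], [0, 0, 1, 1, 0],
                             {0: 2, 1: 3}))  # → 3
```

Keeping a separate running maximum per group is what lets each group be judged only against its own history, which is the fairness intuition behind per-group thresholds.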