Fair Secretaries with Unfair Predictions

Authors: Eric Balkanski, Will Ma, Andreas Maggiori

NeurIPS 2024

Reproducibility findings (each entry gives the variable, the result, and the LLM's supporting evidence):
Research Type: Experimental
Evidence: "Finally, we extend to the k-secretary problem and complement our theoretical analysis with experiments." and, from Section 5 (Experiments): "We simulate our ADDITIVE-PEGGING and MULTIPLICATIVE-PEGGING algorithms in the exact experimental setup of Fujii and Yoshida [30], to test its average-case performance."
Researcher Affiliation: Academia
Evidence: Eric Balkanski (Columbia University, eb3224@columbia.edu), Will Ma (Columbia University, wm2428@gsb.columbia.edu), Andreas Maggiori (Columbia University, am6292@columbia.edu)
Pseudocode: Yes
Evidence: Algorithm 1 (ADDITIVE-PEGGING) and Algorithm 2 (k-PEGGING)
Open Source Code: Yes
Evidence: "Since ADDITIVE-PEGGING and MULTIPLICATIVE-PEGGING achieve almost the same competitive ratio and fairness for all instance types and values of ε, we only present ADDITIVE-PEGGING in Figure 1 but include the code of both in the supplementary material."
Open Datasets: Yes
Evidence: "We simulate our ADDITIVE-PEGGING and MULTIPLICATIVE-PEGGING algorithms in the exact experimental setup of Fujii and Yoshida [30], to test its average-case performance. Fujii and Yoshida [30] generate various types of instances. We follow their Almost-constant, Uniform, and Adversarial types of instances, and also create the Unfair type of instance to further highlight how slightly biased predictions can lead to very unfair outcomes."
Dataset Splits: No
Evidence: "For each type of instance and value of ε in this set, we randomly generate 10000 instances, and then run each algorithm on each instance." The paper does not describe traditional train/validation/test dataset splits, as it evaluates online algorithms on generated instances rather than training a model on a fixed dataset.
Hardware Specification: Yes
Evidence: "Our code is written in Python 3.11.5 and we conduct experiments on an M3 Pro CPU with 18 GB of RAM."
Software Dependencies: No
Evidence: "Our code is written in Python 3.11.5 and we conduct experiments on an M3 Pro CPU with 18 GB of RAM." The paper only specifies the Python version without listing any specific versioned libraries or solvers.
Experiment Setup: Yes
Evidence: "Following [30], we set the number of candidates to be n = 100. We experiment with all values of ε in {0, 1/20, 2/20, ..., 19/20}. For each type of instance and value of ε in this set, we randomly generate 10000 instances, and then run each algorithm on each instance." Instance generation is detailed, e.g. "û_i = δ_i u_i, where δ_i is sampled uniformly and independently from [1 − ε, 1 + ε]."
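The reported noise model can be sketched in a few lines of Python. This is a hypothetical helper (`generate_instance` is not from the paper's code): it draws true values u_i and produces predictions û_i = δ_i · u_i with δ_i sampled uniformly from [1 − ε, 1 + ε], matching the quoted setup. The Uniform[0, 1] choice for the true values is an assumption standing in for one of the instance types followed from Fujii and Yoshida [30].

```python
import random

def generate_instance(n=100, eps=0.5):
    """Sketch of the paper's prediction-noise model.

    True values u_i are drawn Uniform[0, 1] (assumed instance type);
    predictions are u_hat_i = delta_i * u_i with
    delta_i ~ Uniform[1 - eps, 1 + eps].
    """
    u = [random.random() for _ in range(n)]
    u_hat = [ui * random.uniform(1 - eps, 1 + eps) for ui in u]
    return u, u_hat

# One instance at a mid-range noise level from the paper's grid
u, u_hat = generate_instance(n=100, eps=0.5)
```

In the paper's protocol, this generation step would be repeated 10000 times per (instance type, ε) pair, with each algorithm run on every generated instance.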