Recruitment Strategies That Take a Chance
Authors: Gregory Kehne, Ariel D. Procaccia, Jingyan Wang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Empirical evaluation of our algorithms corroborates these theoretical results." and "Finally, we carry out experiments on synthetically generated data (Section 4), focusing on the linear penalty incurred by overshooting." |
| Researcher Affiliation | Academia | Gregory Kehne (Harvard University), Ariel D. Procaccia (Harvard University), Jingyan Wang (Georgia Institute of Technology) |
| Pseudocode | Yes | Algorithm 1 (PGREEDY) and Algorithm 2 (ONESIDEDL+1) |
| Open Source Code | Yes | The code to reproduce our simulation results is available at https://github.com/jingyanw/recruitment-uncertainty. |
| Open Datasets | No | "We carry out experiments on synthetically generated data (Section 4), focusing on the linear penalty incurred by overshooting. In constructing instances we follow the approach of Purohit et al. [12] in their use of beta distributions to orchestrate different kinds of correlation between x_i and p_i." The paper states that the data is synthetically generated and does not provide a link to a dataset. |
| Dataset Splits | No | The paper describes experiments on synthetically generated data to evaluate algorithms, but it does not specify training, validation, or test splits, since the task does not involve training a machine learning model. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions that code is available for reproduction but does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | We consider n = 50 and p_min = 0.01 throughout, and explore the greedy heuristics XGREEDY and XPGREEDY, as well as the constant-factor approximation algorithm ONESIDEDL+1 (Algorithm 2), for a range of M and λ. In constructing instances we follow the approach of Purohit et al. [12] in their use of beta distributions to orchestrate different kinds of correlation between x_i and p_i. We therefore first draw x_i ~ Unif[0, 1], and then produce three types of correlation as follows. Negative correlation: p_i ~ p_min + (1 − p_min) · Beta(10(1 − x_i), 10x_i). Positive correlation: p_i ~ p_min + (1 − p_min) · Beta(10x_i, 10(1 − x_i)). No correlation: p_i ~ Unif[p_min, 1]. |
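
To make the quoted setup concrete, here is a minimal Python sketch of the instance-generation procedure described in the "Experiment Setup" row. It is an assumption-based reconstruction, not the authors' actual code (which is available at https://github.com/jingyanw/recruitment-uncertainty); the function name `generate_instance` and the clipping of x away from 0 and 1 are our own choices.

```python
import numpy as np

def generate_instance(n=50, p_min=0.01, correlation="negative", rng=None):
    """Draw x_i ~ Unif[0, 1] and success probabilities p_i with the
    requested correlation structure, following the beta-distribution
    construction quoted above. (Hypothetical helper, not from the paper.)"""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.uniform(0.0, 1.0, size=n)
    # Keep x strictly inside (0, 1) so both Beta parameters stay positive.
    x = np.clip(x, 1e-9, 1.0 - 1e-9)
    if correlation == "negative":
        # Large x_i pushes p_i toward p_min, and vice versa.
        p = p_min + (1.0 - p_min) * rng.beta(10.0 * (1.0 - x), 10.0 * x)
    elif correlation == "positive":
        p = p_min + (1.0 - p_min) * rng.beta(10.0 * x, 10.0 * (1.0 - x))
    elif correlation == "none":
        p = rng.uniform(p_min, 1.0, size=n)
    else:
        raise ValueError(f"unknown correlation type: {correlation!r}")
    return x, p

# Example: one positively correlated instance with a fixed seed.
x, p = generate_instance(correlation="positive", rng=np.random.default_rng(0))
```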