Robust Rent Division

Authors: Dominik Peters, Ariel D. Procaccia, David Zhu

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We end with some experiments on data taken from Spliddit. They suggest that our three new rules significantly outperform the Spliddit maximin rule on robustness metrics."
Researcher Affiliation | Academia | Dominik Peters (CNRS, Université Paris Dauphine-PSL, dominik@lamsade.fr); Ariel D. Procaccia (Harvard University, arielpro@seas.harvard.edu); David Zhu (Harvard University, david.zhu@gmail.com)
Pseudocode | No | The paper describes algorithms but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] In the supplemental material, but not including the data."
Open Datasets | No | "We evaluated our rules on user data taken from Spliddit. This dataset was kindly provided to us in anonymized form by the maintainer of Spliddit, Nisarg Shah." ... "We use a proprietary dataset from Spliddit.org, whose creators we cite [Goldman and Procaccia, 2014]."
Dataset Splits | No | The paper describes drawing samples and evaluating performance on them, but does not specify a distinct validation split or split percentage in the sense typically used in machine learning.
Hardware Specification | Yes | "Figure 3 shows average computation time to compute allocations optimizing EFrate_S and envy_S, using Gurobi 9.1.2 on four threads of an AMD Ryzen 2990WX (128 GB RAM)."
Software Dependencies | Yes | "Figure 3 shows average computation time to compute allocations optimizing EFrate_S and envy_S, using Gurobi 9.1.2 on four threads of an AMD Ryzen 2990WX (128 GB RAM)."
Experiment Setup | Yes | "For each noise model and choice of ε, we produced a sample S of size m = 100. We then computed allocations maximizing EFrate_S and minimizing envy_S."
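To make the envy_S objective concrete, the following is a minimal illustrative sketch, not the paper's method: the authors solve this with Gurobi, whereas this toy version brute-forces all n! room assignments for a fixed rent vector. The function names (`max_envy`, `min_envy_assignment`) and the fixed-rents simplification are assumptions for illustration only.

```python
import itertools

def max_envy(sample, rooms, rents):
    """Worst-case envy over a sample of valuation matrices.

    sample: list of n x n valuation matrices (sample[k][i][r] = agent i's
            value for room r in the k-th sampled valuation profile),
    rooms:  rooms[i] is the room assigned to agent i,
    rents:  rents[r] is the price of room r.
    """
    n = len(rooms)

    def util(v, i, r):
        # Quasi-linear utility: value for the room minus its rent.
        return v[i][r] - rents[r]

    # Envy of i toward j is how much i prefers j's room-rent bundle to its own;
    # we take the maximum over all sampled profiles and all agent pairs.
    return max(
        util(v, i, rooms[j]) - util(v, i, rooms[i])
        for v in sample
        for i in range(n)
        for j in range(n)
    )

def min_envy_assignment(sample, rents):
    """Brute-force (n! search) the assignment minimizing worst-case envy."""
    n = len(rents)
    return min(
        itertools.permutations(range(n)),
        key=lambda rooms: max_envy(sample, rooms, rents),
    )
```

For example, with diagonal valuations `[[300, 150, 150], [150, 300, 150], [150, 150, 300]]` and equal rents of 200 per room, the identity assignment is envy-free, so the worst-case envy is 0; a real implementation would also optimize the rent vector, which makes the problem a linear/integer program rather than a pure search.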