A One-Size-Fits-All Approach to Improving Randomness in Paper Assignment

Authors: Yixuan Even Xu, Steven Jecmen, Zimeng Song, Fei Fang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show theoretically and experimentally that our method outperforms currently-deployed methods for randomized paper assignment on several intuitive randomness metrics, demonstrating that the randomized assignments produced by our method are general-purpose."
Researcher Affiliation | Collaboration | Yixuan Even Xu (Tsinghua University), Steven Jecmen (Carnegie Mellon University), Zimeng Song (Independent Researcher), Fei Fang (Carnegie Mellon University)
Pseudocode | Yes | Algorithm 1: Network-Flow-Based Approximation of PM
Open Source Code | Yes | All source code is released at https://github.com/YixuanEvenXu/perturbed-maximization
Open Datasets | Yes | "The first dataset is bidding data from the AAMAS 2015 conference [39]... The second dataset contains text-similarity scores recreated from the ICLR 2018 conference with n_p = 911, n_r = 2435 [26]. These scores were computed by comparing the text of each paper with the text of each reviewer's past work; we directly use them as the similarity matrix. ... In Appendix A.3, we also test our algorithm on four additional datasets from [39]."
Dataset Splits | No | The paper describes the datasets used (AAMAS 2015, ICLR 2018, and Preflib datasets) and the parameters ℓ_p and ℓ_r for assigning papers, but does not provide specific details on how the datasets were split into training, validation, or test sets for experiment reproduction.
Hardware Specification | No | The paper states "all experiments are done on a server with 56 cores and 504G RAM, running Ubuntu 20.04.6," but it does not specify exact CPU or GPU models or other detailed hardware components.
Software Dependencies | Yes | "We implement PLRA and two versions of PM (PM-E and PM-Q) using commercial optimization solver Gurobi 10.0 [36]."
Experiment Setup | Yes | "For each algorithm on each dataset, we use a principled method (Appendix A.2) to find 8 sets of hyperparameters that produce solutions with at least {80%, 85%, 90%, 95%, 98%, 99%, 99.5%, 100%} of the maximum possible quality. The constraints are set as ℓ_p = 3, ℓ_r = 6 for ICLR 2018 as was done in [26] and ℓ_p = 3, ℓ_r = 12 for AAMAS 2015 for feasibility."
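The Pseudocode row names a network-flow-based approximation of PM. The paper's Algorithm 1 is not reproduced here, but the classic min-cost-flow formulation of paper assignment that such approximations build on can be sketched as follows; the graph layout, node names, and toy similarity scores are illustrative assumptions, and networkx is used in place of any particular solver:

```python
import networkx as nx

# Toy similarity scores, scaled to integers so min-cost flow is exact.
S = {("p0", "r0"): 90, ("p0", "r1"): 10, ("p0", "r2"): 50,
     ("p1", "r0"): 20, ("p1", "r1"): 80, ("p1", "r2"): 30}
ell_p, ell_r = 1, 1  # reviewers per paper, max papers per reviewer

G = nx.DiGraph()
for p in ("p0", "p1"):
    G.add_edge("src", p, capacity=ell_p, weight=0)
for r in ("r0", "r1", "r2"):
    G.add_edge(r, "sink", capacity=ell_r, weight=0)
for (p, r), s in S.items():
    # Negated similarity: a min-cost flow then maximizes total similarity.
    G.add_edge(p, r, capacity=1, weight=-s)

flow = nx.max_flow_min_cost(G, "src", "sink")
quality = sum(s * flow[p][r] for (p, r), s in S.items())
print(quality)  # total similarity of the optimal assignment
```

On this toy instance the flow assigns each paper to its best available reviewer subject to the load caps.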
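The Software Dependencies row notes that the baseline PLRA and both PM variants were implemented with Gurobi. As a rough sketch of the kind of linear program PLRA solves (maximize expected assignment quality subject to paper and reviewer load constraints, with each assignment probability capped at some Q), here is a toy instance using SciPy's linprog in place of Gurobi; the similarity values and parameters are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.optimize import linprog

# Toy similarity matrix: 2 papers x 3 reviewers (illustrative values).
S = np.array([[0.9, 0.1, 0.5],
              [0.2, 0.8, 0.3]])
n_papers, n_revs = S.shape
ell_p, ell_r, Q = 1, 1, 0.5  # paper load, reviewer load, probability cap

# Flatten x[p, r] into a vector; linprog minimizes, so negate to maximize.
c = -S.flatten()

# Each paper receives exactly ell_p reviewers in expectation.
A_eq = np.zeros((n_papers, n_papers * n_revs))
for p in range(n_papers):
    A_eq[p, p * n_revs:(p + 1) * n_revs] = 1.0
b_eq = np.full(n_papers, float(ell_p))

# Each reviewer is assigned at most ell_r papers in expectation.
A_ub = np.zeros((n_revs, n_papers * n_revs))
for r in range(n_revs):
    A_ub[r, r::n_revs] = 1.0
b_ub = np.full(n_revs, float(ell_r))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, Q))
marginals = res.x.reshape(n_papers, n_revs)
print(f"max expected quality: {-res.fun:.2f}")  # 1.25 on this toy instance
```

The cap Q < 1 forces each paper's probability mass to spread over several reviewers, which is what introduces randomness at a controlled cost to quality.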
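The Experiment Setup row describes a principled method (the paper's Appendix A.2) for finding hyperparameters that retain given fractions of the maximum possible quality. One plausible realization is a binary search over the randomness-inducing parameter, sketched below under the assumption that solution quality is non-increasing in that parameter; the function names and the synthetic quality curve are hypothetical:

```python
def tune_to_quality_fraction(quality_of, target_frac, max_quality,
                             lo=0.0, hi=1.0, tol=1e-6):
    """Return the largest parameter value whose solution keeps at least
    target_frac * max_quality, assuming quality_of is non-increasing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if quality_of(mid) >= target_frac * max_quality:
            lo = mid  # quality floor still met: allow more perturbation
        else:
            hi = mid
    return lo

# Synthetic monotone quality curve, purely for demonstration.
beta = tune_to_quality_fraction(lambda b: 1.0 - 0.5 * b, 0.90, 1.0)
print(round(beta, 4))  # ~0.2: largest perturbation keeping >= 90% quality
```

Running this once per target in {80%, ..., 100%} would yield the 8 hyperparameter settings the quoted setup describes.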