Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments

Authors: Steven Jecmen, Hanrui Zhang, Ryan Liu, Nihar B. Shah, Vincent Conitzer, Fei Fang

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we test our algorithms on datasets from past conferences and show their practical effectiveness (Section 6).
Researcher Affiliation | Academia | Steven Jecmen (Carnegie Mellon University, sjecmen@cs.cmu.edu); Hanrui Zhang (Duke University, hrzhang@cs.duke.edu); Ryan Liu (Carnegie Mellon University, ryanliu@andrew.cmu.edu); Nihar B. Shah (Carnegie Mellon University, nihars@cs.cmu.edu); Vincent Conitzer (Duke University, conitzer@cs.duke.edu); Fei Fang (Carnegie Mellon University, feif@cs.cmu.edu)
Pseudocode | No | The paper describes its algorithms verbally and provides high-level sketches (e.g., "Here we briefly sketch a simpler version of the sampling algorithm" and "We briefly sketch the sampling algorithm that realizes these results here"), but it does not include formal pseudocode blocks or labeled algorithms.
Open Source Code | Yes | All of the code for our algorithms and our empirical results is freely available online: https://github.com/theryanl/mitigating_manipulation_via_randomized_reviewer_assignment/
Open Datasets | Yes | We test our algorithms on several real-world datasets. The first real-world dataset is a similarity matrix recreated from ICLR 2018 data in [35]; this dataset has n = 2435 reviewers and d = 911 papers. We also run experiments on similarity matrices created from reviewer bid data for three AI conferences from the PrefLib dataset MD-00002 [47].
Dataset Splits | No | The paper discusses the datasets used and the loads for reviewers and papers, but it does not specify explicit training/validation/test splits (e.g., an 80/10/10 split) in the usual machine-learning sense. The experiments evaluate the assignment algorithms on entire datasets rather than on models trained on particular splits.
Hardware Specification | Yes | We run all experiments on a computer with 8 cores and 16 GB of RAM, running Ubuntu 18.04 and using Gurobi 9.0.2 [48] to solve the LPs.
Software Dependencies | Yes | We run all experiments on a computer with 8 cores and 16 GB of RAM, running Ubuntu 18.04 and using Gurobi 9.0.2 [48] to solve the LPs.
Experiment Setup | Yes | As done in [35], we set loads k = 6 and ℓ = 3 for all datasets... In Figure 1a, we set all entries of the maximum-probability matrix Q equal to the same constant value q0 (varied on the x-axis)... On ICLR, we fix q0 = 0.5 and randomly assign reviewers to subsets of size 15... we then gradually loosen the constraints on the expected number of same-subset reviewers assigned to the same paper by increasing the constant in Constraint (6) from 1 to 2 in increments of 0.1... (A minimal sketch of this setup follows the table.)
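
The Experiment Setup row above centers on one optimization: a linear program that maximizes total expected similarity subject to reviewer and paper loads, with every reviewer-paper assignment probability capped at q0. Below is a minimal sketch of that LP, assuming gurobipy (the experiments reportedly used Gurobi 9.0.2). This is an illustrative reconstruction, not the authors' released code (see the repository linked above); the toy similarity matrix S, the problem sizes, and the helper name solve_lp are assumptions, and the paper's additional constraints (conflicts of interest, Constraint (6) on same-subset reviewers) are omitted.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def solve_lp(S, k, ell, q0):
    """Max-similarity fractional assignment with every entry capped at q0.

    S is an (n_reviewers x n_papers) similarity matrix; k is the maximum
    reviewer load, ell the number of reviewers per paper. Returns the
    fractional assignment matrix and the LP objective value.
    (Hypothetical helper; not from the paper's repository.)
    """
    n, d = S.shape
    m = gp.Model("randomized-assignment")
    m.Params.OutputFlag = 0
    # x[i, j] = probability that reviewer i is assigned paper j, capped at q0
    x = m.addVars(n, d, lb=0.0, ub=q0, name="x")
    m.setObjective(gp.quicksum(S[i, j] * x[i, j]
                               for i in range(n) for j in range(d)),
                   GRB.MAXIMIZE)
    m.addConstrs((x.sum(i, "*") <= k for i in range(n)), name="reviewer_load")
    m.addConstrs((x.sum("*", j) == ell for j in range(d)), name="paper_load")
    m.optimize()
    F = np.array([[x[i, j].X for j in range(d)] for i in range(n)])
    return F, m.ObjVal

# Toy demo with the paper's loads k = 6, ell = 3 (the sizes are illustrative).
rng = np.random.default_rng(0)
S = rng.random((20, 10))  # 20 reviewers, 10 papers
F, val = solve_lp(S, k=6, ell=3, q0=0.5)
print(f"objective {val:.3f}, max assignment probability {F.max():.3f} (<= 0.5)")
```

The paper then samples a deterministic assignment whose marginal assignment probabilities match the fractional solution F; that sampling step is the algorithm the Pseudocode row notes is only sketched verbally in the paper.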
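The Figure 1a experiment described in the Experiment Setup row then amounts to sweeping the cap q0 and recording the optimal expected similarity relative to the unconstrained (q0 = 1) assignment. A sketch of that sweep, reusing solve_lp and S from the snippet above (the printed percentages come from toy data, not from the paper's datasets):

```python
# Baseline: q0 = 1 imposes no cap on any assignment probability.
_, best = solve_lp(S, k=6, ell=3, q0=1.0)
for q0 in np.arange(0.1, 1.01, 0.1):
    if q0 * S.shape[0] < 3:  # skip infeasible caps: papers cannot reach load ell
        continue
    _, val = solve_lp(S, k=6, ell=3, q0=q0)
    print(f"q0 = {q0:.1f}: {100 * val / best:.1f}% of unconstrained similarity")
```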