Making Paper Reviewing Robust to Bid Manipulation Attacks

Authors: Ruihan Wu, Chuan Guo, Felix Wu, Rahul Kidambi, Laurens van der Maaten, Kilian Weinberger

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We show empirically that our approach provides robustness even when dishonest reviewers collude, have full knowledge of the assignment system's internal workings, and have access to the system's inputs. In addition to being more robust, the quality of our paper review assignments is comparable to that of current, non-robust assignment approaches. We evaluate the efficacy of our system on a novel, synthetic dataset of paper bids and assignments that we developed to facilitate the study of robustness of paper-assignment systems. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science, Cornell University; (2) Facebook AI Research; (3) ASAPP; (4) Amazon Search & AI |
| Pseudocode | Yes | Algorithm 1: "Paper assignment system that is robust against colluding bid manipulation attacks" (located on page 4) |
| Open Source Code | Yes | For full reproducibility we release our code and synthetic data publicly and invite program chairs across disciplines to use our approach on their real bidding data. Code: https://github.com/facebookresearch/secure-paper-bidding |
| Open Datasets | Yes | We construct a synthetic conference dataset from the Semantic Scholar Open Research Corpus (Ammar et al., 2018). This corpus contains publicly available academic papers annotated with attributes such as citation, venue, and field of study. |
| Dataset Splits | No | The paper constructs a synthetic dataset and uses it for its experiments, but it does not specify explicit training, validation, or test splits, whether as percentages, sample counts, or references to predefined splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or cloud-computing instances. |
| Software Dependencies | No | The paper does not provide version numbers for the software dependencies or libraries used in its implementation. |
| Experiment Setup | Yes | We evaluate the effectiveness of the attack by randomly picking 400 papers from our synthetic conference dataset (see Section 5), and determine paper assignments using Eq. (1) (with R = 3 and P = 6) using relevance scores from the NeurIPS-2014 system (Lawrence, 2014). To ensure that no single reviewer has disproportionate influence on the model, we restrict the maximum number of positive bids from a reviewer to be at most U = 60 and subsample bids of a reviewer whenever the number of bids exceeds U. Sketches of both steps appear below the table. |
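
The Experiment Setup row invokes the paper's Eq. (1), which, per the quoted parameters, selects a 0/1 reviewer-paper assignment that maximizes total relevance subject to each paper receiving exactly R = 3 reviewers and each reviewer receiving at most P = 6 papers. The sketch below poses that selection as a linear program; the function name, the SciPy solver, and the random affinity matrix are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the assignment step behind Eq. (1): maximize
# total reviewer-paper affinity subject to coverage and load limits.
# Names and solver choice are assumptions, not the paper's code.
import numpy as np
from scipy.optimize import linprog

def assign_papers(scores, R=3, P=6):
    """scores: (n_reviewers, n_papers) affinity matrix.
    Returns a 0/1 assignment with exactly R reviewers per paper
    and at most P papers per reviewer."""
    n_rev, n_pap = scores.shape
    c = -scores.ravel()  # linprog minimizes, so negate affinities

    # Coverage: sum_i x[i, j] == R for every paper j.
    A_eq = np.zeros((n_pap, n_rev * n_pap))
    for j in range(n_pap):
        A_eq[j, j::n_pap] = 1.0
    b_eq = np.full(n_pap, float(R))

    # Load: sum_j x[i, j] <= P for every reviewer i.
    A_ub = np.zeros((n_rev, n_rev * n_pap))
    for i in range(n_rev):
        A_ub[i, i * n_pap:(i + 1) * n_pap] = 1.0
    b_ub = np.full(n_rev, float(P))

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, 1.0), method="highs")
    # The constraint matrix is totally unimodular, so the LP optimum
    # is already integral; rounding only removes floating-point noise.
    return res.x.reshape(n_rev, n_pap).round().astype(int)

# Toy run: 20 reviewers, 10 papers, random affinities.
rng = np.random.default_rng(0)
assignment = assign_papers(rng.random((20, 10)))
assert (assignment.sum(axis=0) == 3).all()  # R = 3 reviewers per paper
assert assignment.sum(axis=1).max() <= 6    # at most P = 6 papers each
```

Because this is a transportation-style LP, the relaxation's vertices are integral, which is why no separate integer solver is needed in the sketch.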
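
The same row describes a preprocessing safeguard: a reviewer's positive bids are capped at U = 60, with bids subsampled whenever the count exceeds U. Below is a minimal sketch of that step, assuming bids arrive as a reviewer-to-papers mapping; the function and variable names are hypothetical, not taken from the released code.

```python
# Hypothetical sketch of the U = 60 bid cap: reviewers with more than
# U positive bids keep a uniform random subsample of exactly U.
import numpy as np

def cap_positive_bids(bids, U=60, seed=0):
    """bids: dict mapping reviewer id -> list of paper ids bid on
    positively. Returns a copy in which no reviewer exceeds U bids."""
    rng = np.random.default_rng(seed)
    capped = {}
    for reviewer, papers in bids.items():
        if len(papers) > U:
            papers = sorted(rng.choice(papers, size=U, replace=False).tolist())
        capped[reviewer] = list(papers)
    return capped

# Toy run: one reviewer bids on 100 papers and is cut back to 60.
bids = {"r1": list(range(100)), "r2": [0, 3, 7]}
capped = cap_positive_bids(bids)
assert len(capped["r1"]) == 60 and capped["r2"] == [0, 3, 7]
```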