Strategyproof Peer Selection: Mechanisms, Analyses, and Experiments

Authors: Haris Aziz, Omer Lev, Nicholas Mattei, Jeffrey Rosenschein, Toby Walsh

AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We then show, using a detailed experiment with parameter values derived from target real world domains, that our mechanism performs better on average, and in the worst case, than other strategyproof mechanisms in the literature. (Section 5, Simulation Experiments) Using Python and extending code from PREFLIB (Mattei and Walsh 2013) we have implemented the Dollar Partition, Credible Subset, Partition, Dollar Raffle, Dollar Partition Raffle, and Vanilla peer selection mechanisms. All the code developed for this project is implemented as an easily installable Python package available on GitHub, free and open-source under the BSD license. We present results on the first systematic empirical study of strategyproof selection mechanisms.
Researcher Affiliation | Academia | Haris Aziz, Data61 and UNSW, Sydney, Australia (haris.aziz@nicta.com.au); Omer Lev, University of Toronto, Toronto, Canada (omerl@cs.toronto.edu); Nicholas Mattei, Data61 and UNSW, Sydney, Australia (nicholas.mattei@nicta.com.au); Jeffrey S. Rosenschein, The Hebrew University of Jerusalem, Jerusalem, Israel (jeff@cs.huji.ac.il); Toby Walsh, Data61 and UNSW, Sydney, Australia (toby.walsh@nicta.com.au)
Pseudocode | Yes | Algorithm 1: Dollar Partition
Open Source Code | Yes | All the code developed for this project is implemented as an easily installable Python package available on GitHub, free and open-source under the BSD license.
Open Datasets | No | "We first generate the scoring matrix (profile) via a two-step process using a Mallows Model to generate the underlying ordinal evaluation (Mallows 1957)." The paper describes how the data is generated but does not provide access to a specific publicly available dataset used for the experiments.
Dataset Splits | No | The paper describes generating a scoring matrix and applying mechanisms but does not specify training, validation, and test dataset splits.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions Python and PREFLIB but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | Experimental Setup: Given n agents divided into l clusters, with each agent performing m reviews, we want to select k agents. We first generate the scoring matrix (profile) via a two-step process using a Mallows model to generate the underlying ordinal evaluation (Mallows 1957). Mallows models are parameterized by a reference order (σ) and a dispersion parameter (φ). We use a normal distribution giving |D| = [4, 7, 15, 20, 39, 20, 15, 7, 3] and a Borda scoring function that one would expect to find in most conference reviewing, F = [8, 7, 6, 5, 4, 3, 2, 1, 0], corresponding to the grades G = [A+, A, B+, B, C+, C, D+, D, F]. We fixed φ = 0.1 for this testing. We report results for k = 25, n = 130, l = 5, m = 10, and m = 15.
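The two-step profile generation described in this row can be sketched in Python. The paper specifies only the model and its parameters (σ, φ, the grade counts |D|, and the Borda scores F); the repeated-insertion sampler and the function names below are our own illustrative assumptions, and a full reproduction would additionally restrict each reviewer's ranking to their m assigned agents.

```python
import random

# Grade counts |D| and Borda scores F as reported in the paper's setup
# (n = 130 agents, grades A+ .. F).
GRADE_COUNTS = [4, 7, 15, 20, 39, 20, 15, 7, 3]
BORDA_SCORES = [8, 7, 6, 5, 4, 3, 2, 1, 0]


def sample_mallows(reference, phi, rng=None):
    """Sample one ranking from a Mallows model by repeated insertion.

    reference: the reference order sigma (a list of items).
    phi: dispersion in (0, 1]; small phi concentrates mass on sigma.
    """
    rng = rng or random.Random()
    ranking = []
    for j, item in enumerate(reference, start=1):
        # Insert the j-th item at 0-indexed slot `pos` with weight
        # phi ** (j - 1 - pos); appending (pos = j - 1) has the largest
        # weight, so always appending reproduces sigma exactly.
        weights = [phi ** (j - 1 - pos) for pos in range(j)]
        r = rng.random() * sum(weights)
        pos, acc = 0, weights[0]
        while acc < r and pos < j - 1:
            pos += 1
            acc += weights[pos]
        ranking.insert(pos, item)
    return ranking


def rank_to_score(rank):
    """Map a 0-indexed rank among the n = 130 agents to the Borda score
    of its grade bucket: the top 4 ranks get A+ (score 8), the next 7
    get A (score 7), and so on down the grade counts."""
    cum = 0
    for count, score in zip(GRADE_COUNTS, BORDA_SCORES):
        cum += count
        if rank < cum:
            return score
    return BORDA_SCORES[-1]
```

With φ = 0.1, sampled rankings stay close to σ, so an agent's score from different reviewers varies only slightly, which matches the paper's intent of modeling broadly consistent peer evaluations.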