Obvious Strategyproofness Needs Monitoring for Good Approximations

Authors: Diodato Ferraioli, Carmine Ventre

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove a number of bounds on the approximation guarantee of OSP mechanisms, which show that OSP can come at a significant cost. However, rather surprisingly, we prove that OSP mechanisms can return optimal solutions when they use monitoring, a novel mechanism design paradigm that introduces a mild level of scrutiny on agents' declarations (Kovács, Meyer, and Ventre 2015). (An illustrative sketch of an OSP mechanism follows the table.)
Researcher Affiliation | Academia | Diodato Ferraioli, DIEM, Università degli Studi di Salerno, Italy, dferraioli@unisa.it; Carmine Ventre, CSEE, University of Essex, UK, carmine.ventre@gmail.com
Pseudocode | No | The information is insufficient. While the paper describes the steps of its mechanisms in prose, it does not present them in a structured pseudocode or algorithm block.
Open Source Code | No | The information is insufficient. The paper does not provide any statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | No | The information is insufficient. The paper is theoretical and does not describe experiments on specific datasets, so no information on public dataset availability is provided.
Dataset Splits | No | The information is insufficient. The paper is theoretical and does not involve empirical experiments, so there is no mention of training, validation, or test data splits.
Hardware Specification | No | The information is insufficient. The paper is theoretical and does not discuss any specific hardware used for experiments.
Software Dependencies | No | The information is insufficient. The paper is theoretical and does not mention any software dependencies or their version numbers.
Experiment Setup | No | The information is insufficient. The paper is theoretical and does not describe any experimental setup, hyperparameters, or system-level training settings.
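
The paper presents its mechanisms only in prose (see the Pseudocode row above). Purely as an illustration of the extensive-form style of OSP mechanisms, below is a minimal Python sketch of an ascending clock auction, the canonical obviously strategyproof mechanism (Li 2017). It is not the authors' construction, and the function name, starting price, increment, and tie-breaking rule are all hypothetical choices.

# Illustrative sketch only, not from the paper: an ascending clock auction
# for selling a single item, the canonical extensive-form OSP mechanism
# (Li 2017). All names and parameters are hypothetical.

def ascending_clock_auction(values, start_price=0.0, increment=1.0):
    """Raise a public price clock; bidders irrevocably quit once the clock
    would pass their value. Staying while price < value is obviously
    dominant: the best outcome reachable by deviating is no better than
    the worst outcome from complying."""
    active = set(range(len(values)))   # bidders still in the auction
    price = start_price
    while len(active) > 1:
        # Bidders whose value the next tick would exceed drop out now.
        quitters = {i for i in active if values[i] < price + increment}
        if quitters == active:
            # Everyone would quit at once: break the tie arbitrarily.
            return min(active), price
        active -= quitters
        price += increment
    return active.pop(), price         # last bidder wins at the clock price

# Example: the bidder with value 7 wins once the clock passes 5.
print(ascending_clock_auction([3.0, 5.0, 7.0]))   # -> (2, 6.0)

As for the monitoring paradigm cited in the first row, one reading of Kovács, Meyer, and Ventre (2015), stated here as an assumption rather than a quote from the paper, is that an agent selected by the mechanism incurs the maximum of her true and declared costs, which is the "mild level of scrutiny" the abstract refers to.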