Randomized Social Choice Functions under Metric Preferences

Authors: Elliot Anshelevich, John Postl

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We provide new distortion bounds for a variety of randomized mechanisms, for both general metrics and for important special cases. Our results show a sizable improvement in distortion over deterministic mechanisms. Theorem 1: The worst-case distortion of any randomized mechanism when the metric space is α-decisive is at least 1 + α. (A toy computation of these distortion quantities appears below the table.)
Researcher Affiliation | Academia | Elliot Anshelevich and John Postl, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180; eanshel@cs.rpi.edu, postlj@rpi.edu
Pseudocode | Yes | Algorithm 1 ("Optimal randomized mechanism for the decisive, 1-Euclidean space") and Algorithm 2 ("Uncovered Set Min-Cover"). (A minimal sketch of the input/output interface such pseudocode describes appears below the table.)
Open Source Code | No | The paper cites a CoRR (arXiv) preprint of the authors' own work, but it does not state that source code for the described methodology is publicly available, nor does it link to a code repository.
Open Datasets | No | The paper is theoretical, focusing on mathematical proofs and mechanism design within defined metric spaces and preference models rather than on public datasets, so it provides no dataset access information.
Dataset Splits | No | The paper is theoretical and involves no empirical data analysis or model training, so it does not mention training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and focuses on algorithm design and proofs, so it does not specify any hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not rely on specific software implementations or dependencies that would require version numbers for reproducibility.
Experiment Setup | No | The paper is theoretical, focusing on mathematical analysis and algorithm design rather than empirical experiments, so it does not describe experimental setup details such as hyperparameters or training configurations.
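
To make the distortion quantities cited under Research Type concrete, the following is a minimal sketch, not code from the paper: it builds a small hypothetical 1-Euclidean instance (the voter and candidate locations are invented for illustration), computes the expected social cost of an arbitrary lottery over candidates, and reports the resulting distortion next to the instance's decisiveness parameter. The α computed here follows the usual reading of α-decisiveness (each voter is at most α times as far from her favourite candidate as from any other candidate); Theorem 1's 1 + α bound is a worst-case statement over all instances, so a single instance like this one only illustrates the definitions, not the bound itself.

    # Illustrative toy instance on the real line; all locations are hypothetical.
    voters = [0.0, 0.4, 1.0]            # voter locations
    candidates = {"A": 0.1, "B": 0.9}   # candidate locations

    def cost(voter, cand):
        # Metric cost of a voter for a candidate: Euclidean distance on the line.
        return abs(voter - candidates[cand])

    def social_cost(cand):
        # Total distance of all voters to a single candidate.
        return sum(cost(v, cand) for v in voters)

    def expected_social_cost(lottery):
        # Expected social cost of a lottery, given as {candidate: probability}.
        return sum(p * social_cost(c) for c, p in lottery.items())

    def distortion(lottery):
        # Expected social cost divided by the optimal deterministic social cost.
        opt = min(social_cost(c) for c in candidates)
        return expected_social_cost(lottery) / opt

    def alpha_decisiveness():
        # Smallest alpha such that every voter's distance to her favourite
        # candidate is at most alpha times her distance to every other candidate.
        ratios = []
        for v in voters:
            ranked = sorted(candidates, key=lambda c: cost(v, c))
            top, others = ranked[0], ranked[1:]
            ratios.append(max(cost(v, top) / cost(v, c) for c in others))
        return max(ratios)

    if __name__ == "__main__":
        lottery = {"A": 0.5, "B": 0.5}   # an arbitrary lottery, for illustration only
        print("alpha-decisiveness of this instance:", round(alpha_decisiveness(), 2))
        print("distortion of the 50/50 lottery:    ", round(distortion(lottery), 2))
        # Theorem 1 says *some* instance forces distortion at least 1 + alpha for
        # every randomized mechanism; this particular instance need not do so.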
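
The pseudocode noted in the table (Algorithm 1 and Algorithm 2) is given in the paper itself and is not reproduced here. As a far simpler illustration of the interface such pseudocode implements, the sketch below is a random-dictatorship-style rule: it takes only ordinal rankings as input and returns a probability distribution over candidates, never consulting the underlying metric. The preference profile and function name are hypothetical, and this is not the paper's Algorithm 1 or Algorithm 2.

    from collections import Counter
    from fractions import Fraction

    def random_dictatorship(rankings):
        # Pick a voter uniformly at random and output her top choice: each
        # candidate's probability is (#voters ranking it first) / n.
        # Only ordinal information is used; the metric is never consulted.
        n = len(rankings)
        tops = Counter(ranking[0] for ranking in rankings)
        return {cand: Fraction(count, n) for cand, count in tops.items()}

    if __name__ == "__main__":
        # Hypothetical profile: each tuple is one voter's ranking, best first.
        profile = [("A", "B", "C"),
                   ("A", "C", "B"),
                   ("B", "A", "C"),
                   ("C", "B", "A")]
        print(random_dictatorship(profile))
        # -> {'A': Fraction(1, 2), 'B': Fraction(1, 4), 'C': Fraction(1, 4)}

Exact fractions keep the output probabilities readable and avoid floating-point noise in such a small example.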