Voting Rules As Error-Correcting Codes

Authors: Ariel Procaccia, Nisarg Shah, Yair Zick

AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results from real data show that our approach produces significantly more accurate rankings than alternative approaches.
Researcher Affiliation | Academia | Ariel D. Procaccia (Carnegie Mellon University, arielpro@cs.cmu.edu); Nisarg Shah (Carnegie Mellon University, nkshah@cs.cmu.edu); Yair Zick (Carnegie Mellon University, yairzick@cmu.edu)
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing open-source code or a link to a code repository for their methodology.
Open Datasets | Yes | Mao, Procaccia, and Chen (2013) collected these datasets (dots and puzzle) via Amazon Mechanical Turk.
Dataset Splits | No | The paper describes the datasets and their use in evaluation but does not specify training, validation, or test splits in terms of percentages or counts for model training or tuning.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | We use the average error in a profile as the bound t̂ given to OPT_d, i.e., we compute OPT_d(t*, π) on profile π where t* = d(π, σ*). [...] To synchronize the results across different profiles, we use r = (t̂ - MAD)/(t* - MAD), where MAD is the minimum average distance of any ranking from the votes in a profile, that is, the average distance of the ranking returned by MINISUM_d from the input votes. For all profiles, r = 0 implies t̂ = MAD (the smallest value that admits a possible ground truth) and r = 1 implies t̂ = t* (the true average error). In our experiments we use r ∈ [0, 2]; here, t̂ is an overestimate of t* for r ∈ (1, 2] (a valid upper bound on t*), but an underestimate of t* for r ∈ [0, 1) (an invalid upper bound on t*).
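
For context, the quoted normalization can be inverted to recover the bound handed to OPT_d for a given r: t̂ = MAD + r · (t* - MAD). The sketch below illustrates that mapping under assumptions not stated in the excerpt: d is taken to be the Kendall tau distance, MINISUM_d is brute-forced over all rankings, and the profile and ground truth σ* are made-up toy data. It is not the authors' code.

```python
# Minimal sketch of the r -> t_hat mapping implied by
# r = (t_hat - MAD) / (t_star - MAD).
# Assumptions (not from the paper excerpt): Kendall tau as the distance d,
# brute-force MINISUM_d, and a toy profile / ground truth.

from itertools import combinations, permutations


def kendall_tau(sigma, pi):
    """Number of alternative pairs that the two rankings order differently."""
    pos_s = {a: i for i, a in enumerate(sigma)}
    pos_p = {a: i for i, a in enumerate(pi)}
    return sum(
        1
        for a, b in combinations(sigma, 2)
        if (pos_s[a] - pos_s[b]) * (pos_p[a] - pos_p[b]) < 0
    )


def average_distance(ranking, profile):
    """Average distance d from a ranking to the votes in the profile."""
    return sum(kendall_tau(ranking, vote) for vote in profile) / len(profile)


def minisum_mad(profile):
    """MAD: smallest average distance any ranking achieves (the MINISUM_d value)."""
    alternatives = profile[0]
    return min(average_distance(list(p), profile) for p in permutations(alternatives))


def t_hat(r, mad, t_star):
    """Invert r = (t_hat - MAD) / (t_star - MAD) to get the bound fed to OPT_d."""
    return mad + r * (t_star - mad)


if __name__ == "__main__":
    # Toy profile of 4 votes over alternatives a-d, with an assumed ground truth.
    profile = [list("abcd"), list("abdc"), list("bacd"), list("acbd")]
    sigma_star = list("abdc")

    mad = minisum_mad(profile)                      # minimum average distance
    t_star = average_distance(sigma_star, profile)  # true average error d(pi, sigma*)

    for r in (0.0, 0.5, 1.0, 1.5, 2.0):
        print(f"r = {r:.1f}  ->  t_hat = {t_hat(r, mad, t_star):.3f}")
```

Brute-forcing MINISUM_d is exponential in the number of alternatives, so this is only meant to make the role of r concrete, not to reproduce the paper's experiments.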