Axioms for Learning from Pairwise Comparisons

Authors: Ritesh Noothigattu, Dominik Peters, Ariel D. Procaccia

NeurIPS 2020

Reproducibility Variable | Result | LLM Response

Research Type | Theoretical | We show that a large class of random utility models (including the Thurstone-Mosteller model), when estimated using the MLE, satisfies a Pareto efficiency condition. These models also satisfy a strong monotonicity property, which implies that the learning process is responsive to input data. On the other hand, we show that these models fail certain other consistency conditions from social choice theory, and in particular do not always follow the majority opinion. (An illustrative sketch of this kind of MLE estimation follows the table.)
Researcher Affiliation | Academia | Ritesh Noothigattu (Carnegie Mellon University, riteshn@cmu.edu); Dominik Peters (Harvard University, dpeters@seas.harvard.edu); Ariel D. Procaccia (Harvard University, arielpro@seas.harvard.edu)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information (a specific repository link, an explicit code release statement, or code in supplementary materials) for the methodology described in this paper.
Open Datasets | No | The paper describes generating synthetic data for illustrative examples ('generated random datasets', 'sampling uniformly over T') but does not use or provide concrete access information for a publicly available or open dataset for training or evaluation.
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology), as it primarily presents theoretical results with illustrative examples.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its analyses or computations.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the analyses.
Experiment Setup | No | The paper does not contain specific experimental setup details (concrete hyperparameter values, training configurations, or system-level settings), as it focuses on theoretical analysis rather than empirical experimentation.
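To make the Research Type and Open Datasets rows above more concrete, here is a minimal, hedged sketch of the kind of computation they refer to: generating synthetic pairwise comparisons and estimating a Thurstone-Mosteller random utility model by maximum likelihood. This is not the authors' code (the assessment notes that none was released), and it simplifies their setting; the finite set of alternatives, the uniform sampling of pairs, the pinning of one utility for identifiability, and the use of scipy's BFGS optimizer are all assumptions made purely for illustration.

```python
# Illustrative sketch only: MLE estimation of a Thurstone-Mosteller random
# utility model from synthetic pairwise comparisons. Not the paper's code;
# the setup below is a generic simplification chosen for this example.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

m = 5                            # number of alternatives (assumed)
true_u = rng.normal(size=m)      # latent "true" utilities (assumed)

def sample_comparisons(u, n):
    """Draw n comparisons: under Thurstone-Mosteller, alternative i beats j
    with probability Phi(u_i - u_j), where Phi is the standard normal CDF."""
    pairs = []
    for _ in range(n):
        i, j = rng.choice(len(u), size=2, replace=False)  # uniform random pair
        if rng.random() < norm.cdf(u[i] - u[j]):
            pairs.append((i, j))   # i beat j
        else:
            pairs.append((j, i))   # j beat i
    return pairs

data = sample_comparisons(true_u, 2000)

def nll(u):
    """Negative log-likelihood of utilities u given the observed comparisons."""
    diffs = np.array([u[winner] - u[loser] for winner, loser in data])
    return -np.sum(norm.logcdf(diffs))

def nll_free(free):
    # Utilities are only identified up to an additive shift, so pin u[0] = 0.
    return nll(np.concatenate(([0.0], free)))

res = minimize(nll_free, x0=np.zeros(m - 1), method="BFGS")
est_u = np.concatenate(([0.0], res.x))

print("true ranking:     ", np.argsort(-true_u))
print("estimated ranking:", np.argsort(-est_u))
```

Ranking the alternatives by the estimated utilities yields the learned order; properties such as Pareto efficiency, monotonicity, and majority consistency discussed in the paper are statements about how that learned order responds to the comparison data.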