Preference Elicitation as Average-Case Sorting
Authors: Dominik Peters, Ariel D. Procaccia
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also provide empirical evidence for the benefits of our approach. At the end of the paper, we test the utility of our schemes on real-world preference data from PrefLib. In our experiments, we put voters in a random sequence, and elicit one-by-one. We find that this method saves 10.1% of queries on average over all datasets compared to eliciting with an uninformative prior (impartial culture). |
| Researcher Affiliation | Academia | Dominik Peters and Ariel D. Procaccia Harvard University {dpeters, arielpro}@seas.harvard.edu |
| Pseudocode | Yes | Algorithm 1: Distribution-Aware Insertion Sort (an illustrative sketch of this routine appears below the table) |
| Open Source Code | No | The paper mentions using the 'choix' package: 'We calculate Plackett Luce MLEs using the choix package (https://github.com/lucasmaystre/choix).' This refers to a third-party tool used, not the authors' own source code for the methodology described in the paper. |
| Open Datasets | Yes | We apply this idea to all datasets from PrefLib (Mattei and Walsh 2013) that contain complete strict rankings. In addition, we use the Jester dataset (Goldberg et al. 2001) of numerical ratings of jokes. |
| Dataset Splits | No | The paper describes a sequential elicitation process where models are updated after each voter. It does not specify traditional training, validation, and test splits for a fixed dataset, but rather simulates a dynamic learning process. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., CPU, GPU models, memory). |
| Software Dependencies | No | The paper mentions Gurobi and scipy for calculations, and the choix package with a URL (https://github.com/lucasmaystre/choix). However, it does not specify version numbers for Gurobi, scipy, or choix. (A usage sketch for choix appears below the table.) |
| Experiment Setup | Yes | For each dataset, we remove all votes that contain indifferences, and use the resulting profile if the number of remaining votes is at least 10·|A| and if |A| ≥ 5. We make 10 copies of each dataset and in each copy we randomly shuffle its votes. We initialize D as the uniform distribution over A!. In each run, we elicit the first 10 voters using the uniform distribution and only start using the learned model for the 11th voter onwards, to avoid overfitting at the beginning. (A sketch of this experiment loop appears below the table.) |
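The Pseudocode row references the paper's Algorithm 1 (Distribution-Aware Insertion Sort), which is not reproduced in this record. Below is a minimal Python sketch of the underlying idea only: insertion sort whose comparisons are chosen to split a prior probability mass over insertion positions as evenly as possible. The names `weighted_insert`, `gap_prob_fn`, and `prefers` are illustrative rather than from the paper, and with a uniform prior the routine reduces to plain binary insertion sort.

```python
def weighted_insert(ranking, x, gap_probs, prefers):
    """Insert x into ranking (best first) with distribution-aware comparisons.

    gap_probs[i] is the prior probability that x belongs in gap i, i.e.
    directly before ranking[i] (gap len(ranking) means last place).
    prefers(a, b) queries the voter and returns True iff a is preferred to b.
    """
    lo, hi = 0, len(ranking)  # x belongs in some gap in [lo, hi]
    while lo < hi:
        total = sum(gap_probs[lo:hi + 1])
        # Pick the pivot whose answer splits the remaining mass most evenly.
        best_k, best_diff, mass = lo, float("inf"), 0.0
        for k in range(lo, hi):
            mass += gap_probs[k]
            diff = abs(2 * mass - total)
            if diff < best_diff:
                best_k, best_diff = k, diff
        if prefers(x, ranking[best_k]):
            hi = best_k       # x precedes ranking[best_k]
        else:
            lo = best_k + 1   # x comes after ranking[best_k]
    ranking.insert(lo, x)

def elicit_ranking(alternatives, gap_prob_fn, prefers):
    """Elicit a full ranking by inserting alternatives one at a time."""
    ranking = []
    for x in alternatives:
        weighted_insert(ranking, x, gap_prob_fn(ranking, x), prefers)
    return ranking

# Usage: a uniform prior recovers ordinary binary insertion sort.
true_order = ["a", "b", "c", "d", "e"]
uniform = lambda ranking, x: [1.0] * (len(ranking) + 1)
ask = lambda u, v: true_order.index(u) < true_order.index(v)
assert elicit_ranking(list(true_order), uniform, ask) == true_order
```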
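For the Software Dependencies row, here is a minimal usage sketch of the choix package cited above. Whether the paper calls choix's full-ranking estimator `ilsr_rankings` (rather than one of the package's pairwise-data variants) is an assumption, and the toy data and `alpha` value are illustrative.

```python
import choix

# Toy data: four alternatives (0..3); each observation is a full
# ranking given from best to worst.
n_items = 4
rankings = [
    (0, 1, 2, 3),
    (0, 2, 1, 3),
    (1, 0, 2, 3),
    (0, 1, 3, 2),
]

# Plackett-Luce maximum-likelihood parameters via iterative Luce
# spectral ranking; alpha > 0 adds regularization so the estimate
# stays finite on small samples.
params = choix.ilsr_rankings(n_items, rankings, alpha=0.01)
print(params)  # log-strengths; higher means more preferred
```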
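Finally, the Experiment Setup row describes a sequential elicitation loop. The sketch below follows the quoted setup (the 10·|A| and |A| ≥ 5 filters, 10 shuffled copies, and a 10-voter uniform warm-up) but assumes hypothetical callbacks `elicit(model, vote)`, returning the number of queries used, and `fit_model(seen_votes)`.

```python
import random

def run_experiment(profiles, elicit, fit_model, uniform_model,
                   n_copies=10, warmup=10):
    """Average queries per voter for each profile of strict rankings.

    profiles maps a dataset name to a list of votes (full strict
    rankings over the same alternatives); votes with indifferences
    are assumed to have been removed already.
    """
    results = {}
    for name, votes in profiles.items():
        n_alts = len(votes[0])
        # Keep the profile only if it has >= 10*|A| votes and |A| >= 5.
        if len(votes) < 10 * n_alts or n_alts < 5:
            continue
        counts = []
        for _ in range(n_copies):
            shuffled = random.sample(votes, len(votes))
            seen = []
            for i, vote in enumerate(shuffled):
                # Uniform prior for the first `warmup` voters, then a
                # model fit to all previously elicited votes.
                model = uniform_model if i < warmup else fit_model(seen)
                counts.append(elicit(model, vote))
                seen.append(vote)
        results[name] = sum(counts) / len(counts)
    return results
```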