Conventional Machine Learning for Social Choice
Authors: John Doucette, Kate Larson, Robin Cohen
AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that suitable predictive features can be extracted from the data, and demonstrate the high performance of our new framework on the ballots from many real world elections, including comparisons with existing techniques for voting with partial orderings. |
| Researcher Affiliation | Academia | John A. Doucette, Kate Larson, and Robin Cohen David R. Cheriton School of Computer Science University of Waterloo 200 University Avenue West Waterloo, ON, Canada {j3doucet,klarson,rcohen}@uwaterloo.ca |
| Pseudocode | Yes | Algorithm 1: Algorithm for selecting a winning alternative in an election with partial ballots using imputation. (A hedged sketch of such a procedure appears after this table.) |
| Open Source Code | No | The paper discusses the methodology and experimental results but does not provide any links to its own open-source code or state that it is available. |
| Open Datasets | Yes | In this section, we present the application of our imputation based approach to social choice to datasets from the preflib.org repository (Mattei and Walsh 2013). We examined data from a total of eleven elections from the Irish and Debian datasets [footnote 3], which are both comprised of real-world ballots with ranked preference formats. ... Footnote 3: http://www.preflib.org/election/{irish,debian}.php |
| Dataset Splits | No | The paper describes generating '100 random ablations of this ground truth set' and evaluating performance on them, but does not specify a training/validation/test split for the machine learning models. It discusses using a subset of the data to train a classifier within the imputation process, but not a conventional ML dataset split for model validation. (A hedged sketch of one plausible ablation scheme appears after this table.) |
| Hardware Specification | No | The paper mentions 'on a contemporary desktop machine' for run times, but does not provide specific hardware details such as CPU/GPU models, processor speeds, or memory. |
| Software Dependencies | No | The paper mentions using 'L1 regularized logistic regression' and 'one-vs-all classification (OVA)' but does not specify any software libraries or frameworks with their version numbers (e.g., Python, scikit-learn, PyTorch versions). |
| Experiment Setup | No | The paper states 'We used one-vs-all classification (OVA) ... with L1 regularized logistic regression as the base classifier', which describes the model, but it does not provide specific hyperparameters such as the regularization strength or other training settings. (A hedged sketch of one possible instantiation of this classifier appears after this table.) |
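
The pseudocode row only quotes the caption of Algorithm 1. As a rough illustration, the sketch below implements one imputation-based winner-selection procedure consistent with that caption, under the assumption that a learned model supplies an ordering over each ballot's unranked alternatives and that a positional rule (Borda, here) is then applied to the completed ballots. The function names (`impute_ballot`, `borda_winner`, `winner_with_imputation`), the choice of Borda, and the fallback ordering are illustrative, not taken from the paper.

```python
# Hedged sketch of an imputation-based winner-selection procedure.
# Names and the Borda rule are illustrative choices, not the paper's exact method.
from collections import defaultdict


def borda_winner(ballots, alternatives):
    """Pick a winner from complete rankings using the Borda count."""
    m = len(alternatives)
    scores = defaultdict(int)
    for ranking in ballots:
        for position, alt in enumerate(ranking):
            scores[alt] += m - 1 - position  # top rank earns m-1 points
    return max(alternatives, key=lambda a: scores[a])


def impute_ballot(partial, alternatives, predict_order):
    """Complete a partial ranking by appending the unranked alternatives
    in the order suggested by a learned model (predict_order)."""
    missing = [a for a in alternatives if a not in partial]
    return list(partial) + predict_order(partial, missing)


def winner_with_imputation(ballots, alternatives, predict_order):
    """Impute every incomplete ballot, then apply the voting rule."""
    completed = [impute_ballot(b, alternatives, predict_order) for b in ballots]
    return borda_winner(completed, alternatives)


if __name__ == "__main__":
    alternatives = ["a", "b", "c"]
    # Placeholder "model": rank any missing alternatives alphabetically.
    naive_order = lambda partial, missing: sorted(missing)
    ballots = [["a", "b"], ["b", "c", "a"], ["c"]]
    print(winner_with_imputation(ballots, alternatives, naive_order))
```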
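
The dataset-splits row quotes the paper's '100 random ablations of this ground truth set' without further detail. The sketch below shows one plausible reading, assuming each ablation truncates every complete ballot to a random non-empty prefix; the truncation scheme, seed, and function names are assumptions, not details confirmed by the paper.

```python
# Hedged sketch: generating random ablations of complete ballots to create
# partial ballots. Truncating each ballot to a random non-empty prefix is one
# plausible reading of the paper's setup, not a confirmed detail.
import random


def ablate(ballots, rng):
    """Truncate each complete ranking to a random non-empty prefix."""
    return [ballot[: rng.randint(1, len(ballot))] for ballot in ballots]


def random_ablations(ballots, n_ablations=100, seed=0):
    """Produce n_ablations independently ablated copies of the ballot set."""
    rng = random.Random(seed)
    return [ablate(ballots, rng) for _ in range(n_ablations)]


if __name__ == "__main__":
    complete = [["a", "b", "c"], ["c", "a", "b"], ["b", "c", "a"]]
    runs = random_ablations(complete)
    print(len(runs), runs[0])
```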
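
The paper names the classifier (one-vs-all with L1-regularized logistic regression) but, as the software-dependencies and experiment-setup rows note, no library, versions, or hyperparameters. The sketch below shows one way such a model could be instantiated with scikit-learn; the library choice, the liblinear solver, and the regularization strength `C` are assumptions, not details from the paper.

```python
# Hedged sketch: one-vs-all (one-vs-rest) classification with an
# L1-regularized logistic regression base classifier, as named in the paper.
# scikit-learn, the liblinear solver, and C=1.0 are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier


def make_ova_l1_classifier(C=1.0):
    """Wrap an L1-penalized logistic regression in a one-vs-rest scheme."""
    base = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    return OneVsRestClassifier(base)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))    # toy feature matrix
    y = rng.integers(0, 3, size=200)  # toy labels over three classes
    clf = make_ova_l1_classifier().fit(X, y)
    print(clf.predict(X[:5]))
```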