Preference-Based Rank Elicitation using Statistical Models: The Case of Mallows
Authors: Robert Busa-Fekete, Eyke Hüllermeier, Balázs Szörényi
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental studies presented in this section are mainly aimed at showing advantages of our approach in situations where its model assumptions are indeed valid. To this end, we work with synthetic data. Yet, experiments with real data are presented in the supplementary material. |
| Researcher Affiliation | Academia | (1) MTA-SZTE Research Group on Artificial Intelligence, Tisza Lajos krt. 103., H-6720 Szeged, Hungary; (2) Department of Computer Science, University of Paderborn, Warburger Str. 100, 33098 Paderborn, Germany; (3) INRIA Lille - Nord Europe, SequeL project, 40 avenue Halley, 59650 Villeneuve d'Ascq, France |
| Pseudocode | Yes | Algorithm 1 MALLOWSMPI(δ), Algorithm 2 MALLOWSMPR(δ), Procedure 3 MMREC(r, r0, δ, i, j), Procedure 4 MALLOWSMERGE(r, r0, δ, i, k, j), Algorithm 5 MALLOWSKLD(δ, ) |
| Open Source Code | No | The paper does not provide an explicit statement or link to its open-source code. |
| Open Datasets | No | The paper mentions working with 'synthetic data' and that 'experiments with real data are presented in the supplementary material' but does not specify any publicly available datasets by name or provide access information (link, DOI, citation) for them in the main text. |
| Dataset Splits | No | The paper does not specify how datasets were split into training, validation, and test sets or provide percentages/counts for such splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or specific solvers). |
| Experiment Setup | No | While the paper describes algorithm parameters like δ, it does not specify concrete experimental setup details such as hyperparameter values (e.g., learning rate, batch size) or specific training configurations for a learning model. |
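Although the paper reports no concrete experiment setup, the synthetic-data protocol it describes is fully determined by the Mallows φ-model, under which a ranking r has probability proportional to φ^d(r, σ₀), with d the Kendall distance to a center ranking σ₀. A minimal sketch of such a generator using the standard repeated insertion method is shown below; the function name, the identity center, and the parameter choices are illustrative assumptions, not taken from the paper:

```python
import random

def sample_mallows(n, phi, rng=random):
    """Draw a ranking of items 0..n-1 from a Mallows phi-model centered
    at the identity, via the repeated insertion method: item i is placed
    at position j in {0..i} with probability proportional to phi**(i - j).
    phi = 1 gives a uniform permutation; phi -> 0 concentrates on the center.
    """
    ranking = []
    for i in range(n):
        weights = [phi ** (i - j) for j in range(i + 1)]
        # draw r in (0, total] so zero-weight positions are never selected
        r = sum(weights) * (1.0 - rng.random())
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                ranking.insert(j, i)
                break
    return ranking

# Illustrative usage: 1000 synthetic rankings over 10 items
rankings = [sample_mallows(10, 0.3) for _ in range(1000)]
```

Feeding the pairwise order of two items in such sampled rankings to a preference-based elicitation algorithm reproduces the kind of synthetic benchmark the paper alludes to, though the exact parameters used there are not stated.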