Ranking Distributions based on Noisy Sorting

Authors: Adil El Mesaoudi-Paul, Eyke Hüllermeier, Robert Busa-Fekete

ICML 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experimentally, we show that the models perform very well in terms of goodness of fit, compared to existing models for ranking data. |
| Researcher Affiliation | Collaboration | ¹Heinz Nixdorf Institute and Department of Computer Science, Paderborn University, Germany. ²Yahoo Research, New York, USA. |
| Pseudocode | Yes | Algorithm 1: Metropolis-Hastings with Mallows proposal |
| Open Source Code | No | The paper does not provide any concrete access (e.g., a repository link or an explicit statement of code release) to the source code for the described methodology. |
| Open Datasets | Yes | To investigate the performance of our new model and the effectiveness of parameter estimation, we conducted experiments on 213 real-world data sets from the PrefLib repository (http://www.preflib.org). |
| Dataset Splits | Yes | In a first setting, we fit the models to the entire data, while in a second setting, we only fit to half of the data and determine divergence on the other half (averaging over 20 random splits). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not name ancillary software with version numbers (e.g., libraries or solvers with versions). |
| Experiment Setup | No | The paper describes general experimental settings, such as model fitting and the use of K-L divergence, but does not give specific hyperparameters (e.g., learning rate, batch size, optimizer settings) or system-level training configurations. |
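The table notes that the paper's only pseudocode is Algorithm 1, a Metropolis-Hastings sampler with a Mallows proposal. As a rough illustration of that family of samplers (not a reconstruction of the paper's exact algorithm), the sketch below draws from a Mallows distribution P(π) ∝ exp(−θ·d(π, π₀)), where d is the Kendall tau distance, using a symmetric adjacent-transposition proposal. All function and parameter names here are my own.

```python
import math
import random


def kendall_tau(pi, sigma):
    """Kendall tau distance: number of item pairs ordered differently."""
    pos = {item: i for i, item in enumerate(sigma)}
    n = len(pi)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if pos[pi[i]] > pos[pi[j]]
    )


def mh_mallows(center, theta, n_samples, burn_in=1000, seed=0):
    """Metropolis-Hastings sampler for the Mallows model
    P(pi) ∝ exp(-theta * kendall_tau(pi, center)).

    The proposal swaps one random adjacent pair of items; since this
    proposal is symmetric, the acceptance probability reduces to
    min(1, exp(-theta * (d_proposal - d_current))).
    """
    rng = random.Random(seed)
    n = len(center)
    current = list(center)
    d_cur = 0  # distance of the center to itself
    samples = []
    for step in range(burn_in + n_samples):
        i = rng.randrange(n - 1)
        proposal = current[:]
        proposal[i], proposal[i + 1] = proposal[i + 1], proposal[i]
        d_prop = kendall_tau(proposal, center)
        if rng.random() < math.exp(-theta * (d_prop - d_cur)):
            current, d_cur = proposal, d_prop
        if step >= burn_in:
            samples.append(tuple(current))
    return samples
```

With a concentration parameter like `theta=2.0`, the empirical mode of the returned samples should be the center ranking itself, with probability mass decaying in Kendall tau distance from it.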