Learning Mixtures of Ranking Models

Authors: Pranjal Awasthi, Avrim Blum, Or Sheffet, Aravindan Vijayaraghavan

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We ran both the algorithms on synthetic data comprising of rankings of size n = 10. The results of the experiment for rankings of size n = 10 are in Table 1.
Researcher Affiliation | Academia | Pranjal Awasthi (Princeton University, pawashti@cs.princeton.edu); Avrim Blum (Carnegie Mellon University, avrim@cs.cmu.edu); Or Sheffet (Harvard University, osheffet@seas.harvard.edu); Aravindan Vijayaraghavan (New York University, vijayara@cims.nyu.edu)
Pseudocode | Yes | Algorithm 1 LEARN MIXTURES OF TWO MALLOWS MODELS. Input: a set S of N samples from w1 M(φ1, π1) ⊕ w2 M(φ2, π2); accuracy parameters ϵ, ϵ2. Algorithm 2 RECOVER-REST. Input: a set S of N samples from w1 M(φ1, π1) ⊕ w2 M(φ2, π2); parameters ŵ1, ŵ2, φ̂1, φ̂2 and initial permutations π̂1, π̂2; and accuracy parameter ϵ.
Open Source Code | No | The paper does not provide any statements about releasing code, links to a code repository, or mentions of code being available in supplementary materials for the described methodology.
Open Datasets | No | The paper uses 'synthetic data' but does not provide any concrete access information (e.g., URL, DOI, specific repository, or citation to an established public dataset) for this data. It only describes how the data was generated.
Dataset Splits | No | The paper mentions generating N = 5 * 10^6 random samples but does not specify any train/validation/test dataset splits, cross-validation setup, or specific partitioning methodology for the data.
Hardware Specification | Yes | We also comment that our algorithm's runtime was reasonable (less than 10 minutes on a 8-cores Intel x86_64 computer).
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., names of libraries, frameworks, or solvers with their versions).
Experiment Setup | Yes | The weights were sampled u.a.r. from [0, 1], and the φ-parameters were sampled by sampling ln(1/φ) u.a.r. from [0, 5]. Using these models' parameters, we generated N = 5 * 10^6 random samples. For each value of d, we ran both algorithms 20 times and counted the fraction of times on which they returned the true rankings that generated the sample.
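
The Pseudocode and Experiment Setup rows above describe data drawn from a two-component Mallows mixture w1 M(φ1, π1) ⊕ w2 M(φ2, π2), with weights sampled u.a.r. from [0, 1] and φ obtained by sampling ln(1/φ) u.a.r. from [0, 5]. The Python sketch below illustrates one way such synthetic data could be generated, using the standard repeated-insertion sampler for a single Mallows model. The function names, the normalisation w2 = 1 - w1, and the uniformly random choice of the two central rankings are illustrative assumptions (the paper controls the separation d between the central permutations); this is not the authors' code.

```python
import math
import random


def sample_mallows(center, phi, rng=random):
    """Draw one ranking from a Mallows model M(phi, center) via repeated
    insertion: the i-th item of the central ranking is inserted at position j
    with probability proportional to phi**(i - j), so each inversion relative
    to the center is penalised by a factor of phi."""
    ranking = []
    for i, item in enumerate(center):
        weights = [phi ** (i - j) for j in range(i + 1)]
        r = rng.random() * sum(weights)
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                ranking.insert(j, item)
                break
    return ranking


def generate_synthetic_data(n=10, n_samples=5 * 10**6, rng=random):
    """Mimic the setup in the table: weights u.a.r. from [0, 1] (normalised so
    that w2 = 1 - w1, an assumption), phi drawn via ln(1/phi) ~ U[0, 5], and
    n_samples rankings from the mixture. Central rankings are chosen uniformly
    at random here instead of at a fixed separation d."""
    w1 = rng.random()
    weights = [w1, 1.0 - w1]
    phis = [math.exp(-rng.uniform(0.0, 5.0)) for _ in range(2)]
    centers = [rng.sample(range(n), n) for _ in range(2)]
    samples = []
    for _ in range(n_samples):
        k = 0 if rng.random() < weights[0] else 1  # pick a mixture component
        samples.append(sample_mallows(centers[k], phis[k], rng))
    return samples, weights, phis, centers


if __name__ == "__main__":
    # Small run for illustration; the table reports N = 5 * 10^6 per experiment.
    data, w, phis, centers = generate_synthetic_data(n=10, n_samples=1000)
    print(w, phis, centers[0], data[0])
```

The 20-run evaluation described in the Experiment Setup row would simply wrap a generator like this together with the two recovery algorithms and count how often the true central rankings are returned.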