Efficient and Accurate Learning of Mixtures of Plackett-Luce Models

Authors: Duc Nguyen, Anderson Y. Zhang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on both synthetic and real datasets show that our algorithm is competitive in terms of accuracy and speed to baseline algorithms, especially on datasets with a large number of items.
Researcher Affiliation | Academia | Duc Nguyen1, Anderson Y. Zhang2 1 Department of Computer and Information Science, University of Pennsylvania 2 Department of Statistics and Data Science, University of Pennsylvania
Pseudocode | Yes | Algorithm 1: Spectral Clustering with Adaptive Dimension Reduction; Algorithm 2: Least Squares Parameter Estimation; Algorithm 3: Spectral Initialization; Algorithm 4: Weighted Luce Spectral Ranking; Algorithm 5: Spectral EM (EM-LSR).
Open Source Code | No | The paper does not explicitly state that the source code for the described methodology is released or provide a direct link to a code repository.
Open Datasets | Yes | We include commonly used datasets in previous works such as APA, Irish Elections (West, North, Meath) and SUSHI, all with n < 15. We perform additional experiments on the ML-10M movie ratings dataset (Harper and Konstan 2015).
Dataset Splits | Yes | We partition all the rankings with an 80-20 training-testing split, and the training rankings into 80% for inference and 20% for validation.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | Yes | efficiently done using off-the-shelf solvers (Virtanen et al. 2020). Virtanen, P.; Gommers, R.; Oliphant, T. E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; van der Walt, S. J.; Brett, M.; Wilson, J.; Millman, K. J.; Mayorov, N.; Nelson, A. R. J.; Jones, E.; Kern, R.; Larson, E.; Carey, C. J.; Polat, I.; Feng, Y.; Moore, E. W.; VanderPlas, J.; Laxalde, D.; Perktold, J.; Cimrman, R.; Henriksen, I.; Quintero, E. A.; Harris, C. R.; Archibald, A. M.; Ribeiro, A. H.; Pedregosa, F.; van Mulbregt, P.; and SciPy 1.0 Contributors. 2020. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17: 261-272.
Experiment Setup | Yes | We set n = 100 and L = 5 while varying the number of mixture components K for different experiments. We partition all the rankings with an 80-20 training-testing split, and the training rankings into 80% for inference and 20% for validation. K is chosen using the Bayesian Information Criterion (Gelman, Hwang, and Vehtari 2014) on the validation set, and the log-likelihood of the final model is evaluated on the test set. To keep the comparison fair, we use spectral initialization for all algorithms.
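The paper's Algorithm 2 reduces parameter estimation to a least squares problem solved with off-the-shelf solvers (the SciPy reference above). A minimal sketch of that pattern, using NumPy's `lstsq` as a stand-in solver; the design matrix `A` and target `b` below are illustrative placeholders, not the paper's actual construction from ranking data:

```python
import numpy as np

# Hypothetical overdetermined system A x ~= b, standing in for the
# least-squares step of Algorithm 2 (the real A and b would be built
# from pairwise ranking statistics, which we do not reproduce here).
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 4))             # 50 observations, 4 parameters
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true                           # noiseless, for a clean sanity check

# Off-the-shelf solver: minimizes ||A x - b||_2 in one call.
x_hat, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_true))
```

With noiseless targets the solver recovers the true parameters exactly; in practice the same one-line call handles the noisy case, which is why the paper can defer this step to standard libraries.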
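The experiment setup above (80-20 train-test split, a further 80-20 inference-validation split of the training rankings, and a BIC-based choice of K on the validation set) can be sketched as follows. The fitting function is a dummy placeholder, since the actual mixture fitting is the paper's EM-LSR algorithm, and the parameter count per component is an assumed illustration:

```python
import math
import random

def split(data, frac, seed=0):
    """Randomly partition data into (frac, 1 - frac) pieces."""
    data = data[:]
    random.Random(seed).shuffle(data)
    cut = int(frac * len(data))
    return data[:cut], data[cut:]

def bic(log_lik, num_params, num_samples):
    """Bayesian Information Criterion; lower is better."""
    return num_params * math.log(num_samples) - 2.0 * log_lik

# Dummy rankings over 5 items (the paper uses n = 100 items).
n_items = 5
rankings = [list(range(n_items)) for _ in range(1000)]
train, test = split(rankings, 0.8)           # 80-20 train-test
infer, valid = split(train, 0.8, seed=1)     # 80-20 inference-validation

def validation_loglik(K):
    # Placeholder: a real run would fit a K-component mixture on `infer`
    # with EM-LSR and return its log-likelihood on `valid`.
    return -0.1 * K * len(valid)

# Choose K by minimizing BIC on the validation set; K * n_items is an
# assumed parameter count (one score per item per component).
best_K = min(range(1, 6),
             key=lambda K: bic(validation_loglik(K), K * n_items, len(valid)))
print(best_K)
```

The final model for `best_K` would then be scored by its log-likelihood on the held-out `test` rankings, as described in the setup row.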