Learning Plackett-Luce Mixtures from Partial Preferences

Authors: Ao Liu, Zhibing Zhao, Chao Liao, Pinyan Lu, Lirong Xia. Pages 4328-4335.

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on synthetic data show that the algorithm with the Gibbs sampler outperforms the one with GRIM-MCMC. Experiments on real-world data show that the likelihood of the test dataset increases when (i) the partial orders provide more information, or (ii) the number of components in the mixture of Plackett-Luce models increases.
Researcher Affiliation | Academia | Ao Liu (Rensselaer Polytechnic Institute, Troy, NY 12180, USA; liua6@rpi.edu); Zhibing Zhao (Rensselaer Polytechnic Institute, Troy, NY 12180, USA; zhaoz6@rpi.edu); Chao Liao (Shanghai Jiao Tong University, Shanghai 200240, China; chao.liao.95@gmail.com); Pinyan Lu (Shanghai University of Finance and Economics, Shanghai 200433, China; lu.pinyan@mail.shufe.edu.cn); Lirong Xia (Rensselaer Polytechnic Institute, Troy, NY 12180, USA; xial@cs.rpi.edu)
Pseudocode | Yes | Algorithm 1: EM Algorithm for k-PL with GRIM sampler and Gibbs sampler; Algorithm 2: Plackett-Luce GRIM; Algorithm 3: Tuned Plackett-Luce GRIM by Markov Chain MPL; Algorithm 4: Gibbs Plackett-Luce Sampler
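All four listed algorithms build on the Plackett-Luce generative model. As background (this is not the paper's code), a minimal sketch of drawing one full ranking from a single PL component, assuming each item carries a positive utility parameter: items are chosen sequentially, each with probability proportional to its utility among the items not yet ranked.

```python
import random

def sample_pl_ranking(gamma, rng=random):
    """Sample one full ranking from a Plackett-Luce model.

    gamma: dict mapping item -> positive utility parameter.
    At each step, one of the remaining items is drawn with
    probability proportional to its utility, then removed.
    """
    remaining = dict(gamma)
    ranking = []
    while remaining:
        items = list(remaining)
        weights = [remaining[i] for i in items]
        pick = rng.choices(items, weights=weights, k=1)[0]
        ranking.append(pick)
        del remaining[pick]
    return ranking
```

A k-PL mixture would first draw a component according to the mixing weights and then sample a ranking from that component's `gamma`.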
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The sushi data from Preflib (Mattei and Walsh 2013) consists of 5000 linear orders of 10 types of sushi. We randomly split this dataset into training data (3500 linear orders) and test data (1500 partial orders).
Dataset Splits | Yes | We randomly split this dataset into training data (3500 linear orders) and test data (1500 partial orders).
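A 3500/1500 random split like the one described can be reproduced with a seeded shuffle; a minimal sketch (the function name and seed are illustrative, not from the paper):

```python
import random

def split_orders(orders, n_train, seed=0):
    """Randomly split a list of preference orders into train/test sets.

    Shuffles a copy with a fixed seed so the split is reproducible,
    then takes the first n_train orders as training data.
    """
    rng = random.Random(seed)
    shuffled = list(orders)
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]
```

For the sushi data, `split_orders(orders, 3500)` yields 3500 training orders and 1500 test orders.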
Hardware Specification | Yes | All experiments with recorded runtime were run on an Ubuntu Linux server with Intel Xeon E5 v3 CPUs, each clocked at 3.50 GHz.
Software Dependencies | No | The paper mentions that experiments were run on an Ubuntu Linux server but does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow, or specific libraries).
Experiment Setup | Yes | Two linear extensions were sampled for each partial order. We use five EM iterations for each algorithm. Values were averaged over 2000 trials. ... 6 linear extensions were generated from each partial order. We use five EM iterations for each algorithm. Values were averaged over 2000 trials.
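Generating linear extensions of a partial order is exactly the step the paper's GRIM-MCMC and Gibbs samplers address. A randomized topological sort is a simple baseline for this, though it does not in general sample extensions uniformly (which is why MCMC samplers are needed). A sketch, assuming the partial order is given as a set of required-precedence pairs (this representation is an assumption for illustration):

```python
import random

def random_linear_extension(items, prefers, seed=None):
    """Sample a linear extension of a partial order.

    items: iterable of alternatives.
    prefers: set of (a, b) pairs meaning a must precede b.
    Repeatedly picks, uniformly at random, one of the currently
    unconstrained items (no unplaced predecessor). Note: this
    randomized topological sort is NOT uniform over all linear
    extensions in general; it is only an illustrative baseline.
    """
    rng = random.Random(seed)
    remaining = set(items)
    extension = []
    while remaining:
        # Items whose every required predecessor is already placed.
        free = sorted(x for x in remaining
                      if not any(a in remaining and (a, x) in prefers
                                 for a in remaining))
        pick = rng.choice(free)
        extension.append(pick)
        remaining.remove(pick)
    return extension
```

Repeating this (e.g., two or six times per partial order, as in the quoted setup) yields the sampled extensions fed into each EM iteration.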