Composite Marginal Likelihood Methods for Random Utility Models
Authors: Zhibing Zhao, Lirong Xia
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments on synthetic data show that RBCML for Gaussian RUMs achieves better statistical efficiency and computational efficiency than the state-of-the-art algorithm, and our RBCML for the Plackett-Luce model provides flexible tradeoffs between running time and statistical efficiency." From Section 9 (Experiments): "We compare RBCML with state-of-the-art algorithms for both Gaussian RUMs (the GMM algorithm by Azari Soufiani et al. (2014)) and the Plackett-Luce model (the I-LSR algorithm by Maystre & Grossglauser (2015) and the consistent rank-breaking algorithm by Khetan & Oh (2016b)). In both experiments, we generate synthetic datasets of full rankings over m = 10 alternatives." |
| Researcher Affiliation | Academia | 1Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY, USA. Correspondence to: Zhibing Zhao <zhaoz6@rpi.edu>, Lirong Xia <xial@cs.rpi.edu>. |
| Pseudocode | Yes | Algorithm 1 Adaptive RBCML |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology or a link to a code repository. |
| Open Datasets | No | In both experiments, we generate synthetic datasets of full rankings over m = 10 alternatives. For Gaussian RUMs, the utility distribution of a_i is N(θ_i, 1). No access information is provided for this generated data. |
| Dataset Splits | No | The paper generates synthetic datasets and reports results over 50000 trials, but it does not specify explicit training, validation, or test dataset splits (e.g., percentages, sample counts, or predefined splits) for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details with version numbers (e.g., library names and versions like PyTorch 1.9 or specific solver versions). |
| Experiment Setup | Yes | In both experiments, we generate synthetic datasets of full rankings over m = 10 alternatives. The ground truth parameter is generated uniformly at random between 0 and 5 and shifted s.t. θ_10 = 0. For Gaussian RUMs, the utility distribution of a_i is N(θ_i, 1). ... We use a two-step (T = 2 in Algorithm 1) RBCML... For any pair of alternatives a_{i1} and a_{i2}, we let w_{i1 i2} = w_{i2 i1} = 1/(\|θ_{i1} − θ_{i2}\| + 4). |
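The experiment-setup row above describes a concrete data-generation procedure, which can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code: the function names, the use of NumPy, the seed, and the number of trials are my own choices; only the parameter range [0, 5], the shift so θ_10 = 0, the N(θ_i, 1) utilities, and the pairwise weight formula 1/(|θ_i − θ_j| + 4) come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is arbitrary, for reproducibility of this sketch
m = 10  # number of alternatives, as in the paper

# Ground-truth parameter: uniform in [0, 5], shifted so that theta_10 = 0.
theta = rng.uniform(0.0, 5.0, size=m)
theta -= theta[-1]

def sample_ranking(theta, rng):
    """Sample one full ranking from a Gaussian RUM: each alternative a_i
    receives utility U_i ~ N(theta_i, 1); alternatives are ranked by utility."""
    utilities = rng.normal(loc=theta, scale=1.0)
    return np.argsort(-utilities)  # alternative indices, best first

# A small synthetic dataset of full rankings (the paper reports 50000 trials).
rankings = [sample_ranking(theta, rng) for _ in range(1000)]

# Symmetric pairwise weights: w_{ij} = w_{ji} = 1 / (|theta_i - theta_j| + 4).
W = 1.0 / (np.abs(theta[:, None] - theta[None, :]) + 4.0)
```

Each sampled ranking is a permutation of the m alternative indices, and the weight matrix W is symmetric by construction, matching the w_{i1 i2} = w_{i2 i1} condition in the quoted setup.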