Learning Mixtures of Random Utility Models with Features from Incomplete Preferences

Authors: Zhibing Zhao, Ao Liu, Lirong Xia

IJCAI 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our experiments on synthetic data demonstrate the effectiveness of MLE on PL with features with tradeoffs between statistical efficiency and computational efficiency. Our experiments on real-world data show the prediction power of PL with features and its mixtures." |
| Researcher Affiliation | Collaboration | (1) Microsoft, 555 110th Ave NE, Bellevue, WA 98004; (2) Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180 |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions a full version on arXiv but does not explicitly state that the source code for the methodology is released, nor does it link to a code repository. |
| Open Datasets | No | The paper mentions using synthetic data and the sushi dataset but does not provide a link, DOI, or formal citation for accessing these datasets. |
| Dataset Splits | No | The paper does not explicitly state the training, validation, or test dataset splits (e.g., percentages, sample counts, or a cross-validation setup). |
| Hardware Specification | Yes | "MLE for PLX-TO was implemented in MATLAB with the built-in fminunc function and tested on an Ubuntu Linux server with Intel Xeon E5 v3 CPUs each clocked at 3.50 GHz." |
| Software Dependencies | No | The paper mentions MATLAB with the built-in fminunc function but does not specify the MATLAB version. |
| Experiment Setup | Yes | "Fix m = 10 and d = 10. For each agent and each alternative, the feature vector is generated in [−1, 1] uniformly at random. Each component in β is generated uniformly at random in [−2, 2]. MLE for PLX-TO was implemented in MATLAB with the built-in fminunc function and tested on an Ubuntu Linux server with Intel Xeon E5 v3 CPUs each clocked at 3.50 GHz." |
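The experiment setup quoted above can be sketched end to end: generate synthetic Plackett-Luce (PL) data with features (m = 10 alternatives, d = 10 feature dimensions, features uniform in [−1, 1], β uniform in [−2, 2], as the paper states) and recover β by maximum likelihood. This is a minimal Python sketch, not the paper's MATLAB implementation: the number of agents `n`, the use of full rankings, and SciPy's BFGS (standing in for MATLAB's `fminunc`) are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, d, n = 10, 10, 200  # m and d follow the paper; n agents is an assumption

# Per-agent, per-alternative features in [-1, 1]^d; true beta in [-2, 2]^d (per the paper)
X = rng.uniform(-1.0, 1.0, size=(n, m, d))
beta_true = rng.uniform(-2.0, 2.0, size=d)

def sample_ranking(utils, rng):
    """Sample a full ranking from a Plackett-Luce model: repeatedly draw the next
    item among the remaining ones with probability proportional to exp(utility)."""
    remaining = list(range(len(utils)))
    order = []
    while remaining:
        w = np.exp(utils[remaining])
        pick = rng.choice(len(remaining), p=w / w.sum())
        order.append(remaining.pop(pick))
    return order

rankings = [sample_ranking(X[i] @ beta_true, rng) for i in range(n)]

def neg_log_likelihood(beta):
    """Negative log-likelihood of the observed rankings under PL with features."""
    nll = 0.0
    for i, order in enumerate(rankings):
        u = X[i] @ beta  # utility of each alternative for agent i
        for k in range(m - 1):
            rest = order[k:]  # alternatives still unranked at stage k
            nll -= u[order[k]] - np.logaddexp.reduce(u[rest])
    return nll

# Unconstrained MLE; BFGS plays the role of MATLAB's fminunc here (an assumption).
result = minimize(neg_log_likelihood, np.zeros(d), method="BFGS")
beta_hat = result.x
```

With a few hundred full rankings the estimate `beta_hat` should track `beta_true` closely, which is the statistical-efficiency side of the tradeoff the paper's synthetic experiments measure.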