A Bayesian Latent Variable Model of User Preferences with Item Context

Authors: Aghiles Salah, Hady W. Lauw

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results on real-world datasets show evident performance improvements over strong factorization models.
Researcher Affiliation | Academia | Aghiles Salah and Hady W. Lauw, School of Information Systems, Singapore Management University, Singapore, {asalah, hadywlauw}@smu.edu.sg
Pseudocode | Yes | Algorithm 1: Variational inference for C2PF.
Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We use six datasets from Amazon.com, provided by McAuley et al. [2015b; 2015a]. These datasets include both the user-item preferences and the Also Viewed lists that we treat as the item contexts. (...) http://jmcauley.ucsd.edu/data/amazon/ (a data-loading sketch follows this table)
Dataset Splits | No | The paper states 'For each dataset, we randomly select 80% of the ratings as training data and the remaining 20% as test data' but does not explicitly mention a separate validation set or split for hyperparameter tuning (a sketch of the reported 80/20 split follows this table).
Hardware Specification | No | The paper discusses computational complexity but does not provide specific details about the hardware (e.g., GPU/CPU models, memory, cloud instances) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with their version numbers required to reproduce the experiments.
Experiment Setup | Yes | For most experiments, we set the number of latent components K to 100. (...) To encourage sparse latent representations, we set αθ = αβ = αξ = (0.3, 0.3), resulting in exponentially shaped Gamma distributions with mean equal to 1. We further set δ = (2, 5) and ακ = 2, fixing the prior mean over the context effects to 0.5. (...) We initialize the Gamma variational parameters, λs and λr, to a small random perturbation of the corresponding prior parameters. (...) To set the different hyperparameters of MCF, we follow the same strategy, grid search, as in [Park et al., 2017]. (An initialization sketch follows this table.)
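
The Open Datasets row points to the Amazon review collections at http://jmcauley.ucsd.edu/data/amazon/, which pair per-category rating files with item metadata containing "also viewed" lists. Below is a minimal sketch, in Python, of assembling the two inputs the paper describes (user-item preferences and item contexts). The file names, the gzipped JSON-lines review layout, and the related/also_viewed metadata fields are assumptions about that distribution, not details stated in the paper.

```python
import gzip
import json
import ast
from collections import defaultdict

def load_ratings(path):
    """Read (user, item, rating) triples from a gzipped JSON-lines review file.
    Field names follow the Amazon review dumps (assumption, not from the paper)."""
    triples = []
    with gzip.open(path, "rt") as f:
        for line in f:
            r = json.loads(line)
            triples.append((r["reviewerID"], r["asin"], float(r["overall"])))
    return triples

def load_also_viewed(path):
    """Read item -> 'also viewed' lists from a gzipped metadata file.
    Metadata lines are Python dict literals, hence ast.literal_eval (assumption)."""
    context = defaultdict(list)
    with gzip.open(path, "rt") as f:
        for line in f:
            m = ast.literal_eval(line)
            also = m.get("related", {}).get("also_viewed", [])
            if also:
                context[m["asin"]] = also
    return context

# Hypothetical file names for one category; substitute the files you downloaded.
ratings = load_ratings("reviews_Office_Products_5.json.gz")
context = load_also_viewed("meta_Office_Products.json.gz")
```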
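
The Dataset Splits row notes that the paper uses a random 80/20 split of the ratings with no separate validation split. The following is a minimal sketch of such a split over rating triples; the fixed seed (for repeatability) and the shuffling strategy are assumptions rather than choices reported in the paper.

```python
import random

def split_ratings(triples, train_frac=0.8, seed=0):
    """Randomly split (user, item, rating) triples into train/test sets (80/20 by default)."""
    rng = random.Random(seed)
    shuffled = list(triples)
    rng.shuffle(shuffled)
    cut = int(train_frac * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

# Tiny illustrative input; in practice, pass the full list of rating triples.
train, test = split_ratings([("u1", "i1", 5.0), ("u1", "i2", 4.0),
                             ("u2", "i1", 3.0), ("u2", "i3", 2.0),
                             ("u3", "i2", 4.0)])
```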
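
The Experiment Setup row fixes K = 100, Gamma hyperparameters αθ = αβ = αξ = (0.3, 0.3), δ = (2, 5), and ακ = 2, and initializes the Gamma variational parameters λs and λr as a small random perturbation of the corresponding prior parameters. A minimal sketch of that initialization follows; the perturbation scale, array shapes, and placeholder user/item counts are assumptions, not values from the paper.

```python
import numpy as np

K = 100                      # number of latent components
alpha_theta = (0.3, 0.3)     # Gamma (shape, rate) prior for user factors
alpha_beta = (0.3, 0.3)      # Gamma (shape, rate) prior for item factors
alpha_xi = (0.3, 0.3)        # Gamma (shape, rate) prior for context factors
delta = (2.0, 5.0)           # hyperprior reported in the paper
alpha_kappa = 2.0            # context-effect hyperparameter (prior mean fixed to 0.5 per the quoted setup)

def init_gamma_variational(prior, size, noise=0.01, rng=None):
    """Initialize Gamma variational shape/rate parameters as a small random
    perturbation of the corresponding prior parameters (noise scale is an assumption)."""
    rng = rng or np.random.default_rng(0)
    shape, rate = prior
    lam_shp = shape + noise * rng.random(size)   # lambda^s
    lam_rte = rate + noise * rng.random(size)    # lambda^r
    return lam_shp, lam_rte

n_users, n_items = 1000, 500                     # placeholder sizes
lam_theta = init_gamma_variational(alpha_theta, (n_users, K))
lam_beta = init_gamma_variational(alpha_beta, (n_items, K))
lam_xi = init_gamma_variational(alpha_xi, (n_items, K))
```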