VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback
Authors: Ruining He, Julian McAuley
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we perform experiments on multiple real-world datasets. |
| Researcher Affiliation | Academia | Ruining He, University of California, San Diego (r4he@ucsd.edu); Julian McAuley, University of California, San Diego (jmcauley@ucsd.edu) |
| Pseudocode | No | The paper describes the model learning process using mathematical equations and textual explanations, but no structured pseudocode or clearly labeled algorithm block is present. |
| Open Source Code | No | All of our code and datasets shall be made available at publication time so that our experimental evaluation is completely reproducible. |
| Open Datasets | Yes | The first group of datasets are from Amazon.com introduced by McAuley et al. (2015). ... We also introduce a new dataset from Tradesy.com... all of which shall be made available at publication time. |
| Dataset Splits | Yes | We split our data into training/validation/test sets by selecting for each user u a random item to be used for validation Vu and another for testing Tu. All remaining data is used for training. |
| Hardware Specification | No | The paper states only that "All experiments were performed on a standard desktop machine with 4 physical cores and 32GB main memory", without naming specific CPU or GPU models. |
| Software Dependencies | No | The paper mentions software components like the "Caffe reference model" and "MyMediaLite" but does not provide specific version numbers for these or any other ancillary software components used in the experiments. |
| Experiment Setup | Yes | All hyperparameters are tuned using a validation set as we describe in our experimental section later. On Amazon, regularization hyperparameter λΘ = 10 works the best for BPR-MF, MM-MF and VBPR in most cases. On Tradesy.com, λΘ = 0.1 is used for BPR-MF and VBPR, and λΘ = 1 for MM-MF. λE is always set to 0 for VBPR. For IBR, the rank of the Mahalanobis transform is set to 100, which is reported to perform very well on Amazon data. |
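The per-user leave-one-out split quoted in the Dataset Splits row (one random item per user for validation V_u, another for testing T_u, the rest for training) can be sketched as follows. This is a minimal illustration, not the authors' released code; the function name, the `interactions` dict layout, and the handling of users with fewer than three items are assumptions.

```python
import random

def leave_one_out_split(interactions, seed=0):
    """Per-user split as described in the paper: one random item per
    user held out for validation (V_u), another for testing (T_u),
    all remaining interactions used for training.

    `interactions` maps user id -> list of item ids (implicit feedback).
    Names and the <3-items fallback are illustrative assumptions.
    """
    rng = random.Random(seed)
    train, val, test = {}, {}, {}
    for user, items in interactions.items():
        if len(items) < 3:
            # Not enough items to hold out two; keep everything in training.
            train[user] = list(items)
            continue
        held_out = rng.sample(items, 2)       # two distinct random items
        val[user], test[user] = held_out
        train[user] = [i for i in items if i not in held_out]
    return train, val, test
```

A fixed seed makes the split reproducible across runs, which matters when comparing baselines (BPR-MF, MM-MF, VBPR) on identical train/validation/test partitions.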