RecoNet: An Interpretable Neural Architecture for Recommender Systems

Authors: Francesco Fusco, Michalis Vlachos, Vasileios Vasileiadis, Kathrin Wardatzky, Johannes Schneider

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate two aspects of RecoNet: (a) its predictive power as a recommender system and (b) the validity and quality of the explanations given. We focus our experiments on the implicit recommendation setting because it is the most prevalent (and most difficult) setting. Implicit or one-class problems come with binary ratings, where a value of 1 denotes that the user bought/liked/viewed an item, and 0 denotes an unknown. We use two datasets for our experiments: a proprietary implicit dataset from our institution containing 550,000 purchases (130,000 users and 500 items), called B2B, and the publicly available MovieLens dataset with 1 million ratings (approx. 6,000 users and 4,000 movies).
Researcher Affiliation | Collaboration | Francesco Fusco (1), Michalis Vlachos (1, 2), Vasileios Vasileiadis (3), Kathrin Wardatzky (1) and Johannes Schneider (4); (1) IBM Research AI, (2) University of Lausanne, (3) ipQuants AG, (4) University of Liechtenstein
Pseudocode | No | The paper describes the model and processes in text and equations, but it does not include a structured pseudocode block or an algorithm labeled as such.
Open Source Code | No | The paper mentions 'We created a demo of RecoNet here for the MovieLens data.' but does not provide a direct link to the source code or an explicit statement about its public availability.
Open Datasets | Yes | We use two datasets for our experiments: a proprietary implicit dataset from our institution containing 550,000 purchases (130,000 users and 500 items), called B2B, and the publicly available MovieLens dataset with 1 million ratings (approx. 6,000 users and 4,000 movies).
Dataset Splits | Yes | We split the data using a 60/20/20 split for training, development and testing, respectively.
Hardware Specification | No | The paper mentions allocating 'at least two days of computational time per technique for hyper-parameter searching' but does not provide specific hardware details such as GPU or CPU models.
Software Dependencies | No | The paper mentions using the 'Turicreate package' and 'pyFM' but does not specify their version numbers or any other software dependencies with versions.
Experiment Setup | No | The paper mentions performing 'an extensive grid search to determine good hyperparameters' but does not specify the actual hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) used in the experiments.
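The implicit (one-class) setting quoted in the Research Type row can be illustrated with a minimal sketch: a binary user-item matrix where 1 marks an observed interaction and 0 means unknown. The interaction pairs and matrix dimensions below are illustrative, not taken from the B2B or MovieLens data.

```python
import numpy as np

# Toy implicit-feedback interactions: (user_id, item_id) pairs.
# A 1 in the matrix means the user bought/liked/viewed the item;
# a 0 is an unknown, not an explicit negative.
interactions = [(0, 1), (0, 3), (1, 0), (2, 2)]
n_users, n_items = 3, 4

R = np.zeros((n_users, n_items), dtype=np.int8)
for u, i in interactions:
    R[u, i] = 1

print(R)
# [[0 1 0 1]
#  [1 0 0 0]
#  [0 0 1 0]]
```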
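The 60/20/20 train/development/test split reported in the Dataset Splits row can be sketched as follows. The paper does not describe how the split was randomized, so the shuffling and the interaction format here are assumptions.

```python
import numpy as np

def split_interactions(interactions, seed=0):
    """Shuffle interaction records and split them 60/20/20 into
    train, development, and test sets (an assumed randomization;
    the paper only states the ratios)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(interactions))
    n_train = int(0.6 * len(interactions))
    n_dev = int(0.2 * len(interactions))
    train = [interactions[i] for i in idx[:n_train]]
    dev = [interactions[i] for i in idx[n_train:n_train + n_dev]]
    test = [interactions[i] for i in idx[n_train + n_dev:]]
    return train, dev, test

# Toy (user_id, item_id) pairs standing in for implicit interactions.
pairs = [(u, i) for u in range(10) for i in range(5)]
train, dev, test = split_interactions(pairs)
print(len(train), len(dev), len(test))  # 30 10 10
```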
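The "extensive grid search" mentioned in the Experiment Setup row can be sketched generically. The parameter names, grid values, and scoring function below are hypothetical placeholders; a real run would train the model for each configuration and score it on the development split.

```python
import itertools

def grid_search(score_fn, grid):
    """Exhaustively evaluate every hyper-parameter combination in
    `grid` and return the best-scoring configuration."""
    best_cfg, best_score = None, float("-inf")
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = score_fn(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical grid and a toy scoring function standing in for
# "train, then evaluate on the development set".
grid = {"lr": [0.01, 0.001], "embedding_dim": [32, 64]}
toy_score = lambda cfg: -abs(cfg["lr"] - 0.001) + cfg["embedding_dim"] / 100
best, score = grid_search(toy_score, grid)
print(best)  # {'embedding_dim': 64, 'lr': 0.001}
```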