Expressive Recommender Systems through Normalized Nonnegative Models

Authors: Cyril Stark

AAAI 2016

Reproducibility Variable | Result | LLM Response

Research Type: Experimental
  "Finally, we evaluate the last remaining criterion, predictive power, in numerical experiments. We show that in mean-average-error, NNMs outperform methods like SVD++ (Koren 2008) on MovieLens datasets. This indicates that the high level of interpretability of NNMs comes not at the price of sacrificing predictive power."

Researcher Affiliation: Academia
  "Cyril J. Stark, Massachusetts Institute of Technology, 77 Massachusetts Avenue, 6-304, Cambridge, MA 02139-4307, USA, cyril@mit.edu"

Pseudocode: Yes
  "Algorithm 1: Alternating constrained least squares for NNM"

Open Source Code: No
  The paper does not provide concrete access to source code for the methodology it describes. It mentions using the LibRec library for evaluating known methods, but not for its own NNM implementation.

Open Datasets: Yes
  "We evaluate predictive power and interpretability of NNMs at the example of the omnipresent MovieLens (ML) 100K and 1M datasets." (Footnote 1: http://grouplens.org/datasets/movielens/)

Dataset Splits: Yes
  "We evaluate mean-average-error (MAE) and root-mean-squared-error (RMSE) through 5-fold cross validation with 0.8-to-0.2 data splitting."

Hardware Specification: No
  The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.

Software Dependencies: No
  The paper mentions "Matlab using cvx (Grant and Boyd 2014) calling SDPT3 (Toh, Todd, and Tütüncü 1999)". However, it does not provide specific version numbers for Matlab, cvx, or SDPT3.

Experiment Setup: Yes
  "We evaluate mean-average-error (MAE) and root-mean-squared-error (RMSE) through 5-fold cross validation with 0.8-to-0.2 data splitting. All results were computed using 16 iterations. Fix D (e.g., by cross validation)."
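The paper's Algorithm 1 ("Alternating constrained least squares for NNM") is only available as pseudocode, and no reference implementation is released. The following is a minimal sketch of the general idea, not the author's code: it assumes a simplified NNM in which ratings are rescaled to [0, 1] and modeled as R[u,i] ≈ p_u · q_i, where each user vector p_u lies on the probability simplex (the "normalized nonnegative" constraint) and each item vector q_i has entries in [0, 1]. The function names `nnm_als` and `project_simplex`, the projected-gradient update, and the learning rate are all illustrative choices, not details from the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1} (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def nnm_als(R, mask, D=5, iters=16, lr=0.1, seed=0):
    """Alternating projected least squares for a simplified NNM.

    R    : (n_users, n_items) ratings rescaled to [0, 1]
    mask : same shape, 1 where a rating is observed, 0 elsewhere
    D    : latent dimension (the paper fixes D by cross validation)
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.random((n_users, D))
    P /= P.sum(axis=1, keepdims=True)          # start on the simplex
    Q = rng.random((n_items, D))               # entries in [0, 1]
    for _ in range(iters):                     # 16 iterations, as in the paper
        # update user vectors, then re-project each row onto the simplex
        E = mask * (P @ Q.T - R)
        P = np.array([project_simplex(p) for p in P - lr * (E @ Q)])
        # update item vectors, then clip entries back into [0, 1]
        E = mask * (P @ Q.T - R)
        Q = np.clip(Q - lr * (E.T @ P), 0.0, 1.0)
    return P, Q
```

Because every row of P is an explicit probability distribution over latent states, the factors remain directly interpretable, which is the property the paper's experiments show need not cost predictive power.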
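The evaluation protocol quoted above (MAE and RMSE via 5-fold cross validation with a 0.8-to-0.2 split) can be sketched as follows. This is an illustrative reconstruction, not the paper's evaluation code: `five_fold_mae_rmse` and the `global_mean` baseline predictor are hypothetical names, and any model (e.g., a trained NNM) could be plugged in as `predict_fn`.

```python
import numpy as np

def five_fold_mae_rmse(ratings, predict_fn, seed=0):
    """5-fold cross validation over observed (user, item, rating) triples.

    ratings    : (n, 3) array of [user_id, item_id, rating] rows
    predict_fn : callable(train_triples, test_pairs) -> predicted ratings
    Each fold holds out ~20% of the triples for testing, matching the
    0.8-to-0.2 split described in the paper.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(ratings)), 5)
    maes, rmses = [], []
    for k in range(5):
        test = ratings[folds[k]]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        pred = predict_fn(ratings[train_idx], test[:, :2].astype(int))
        err = pred - test[:, 2]
        maes.append(np.mean(np.abs(err)))
        rmses.append(np.sqrt(np.mean(err ** 2)))
    return float(np.mean(maes)), float(np.mean(rmses))

def global_mean(train, pairs):
    """Trivial baseline: predict the global training mean for every pair."""
    return np.full(len(pairs), train[:, 2].mean())
```

By construction MAE never exceeds RMSE on any fold, which is a quick sanity check when reproducing the reported numbers.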