Low-Rank Matrix Approximation with Stability

Authors: Dongsheng Li, Chao Chen, Qin Lv, Junchi Yan, Li Shang, Stephen Chu

ICML 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on real-world datasets demonstrate that the proposed work can achieve better prediction accuracy compared with both state-of-the-art low-rank matrix approximation methods and ensemble methods in recommendation task.
Researcher Affiliation | Collaboration | IBM Research China, 399 Keyuan Road, Shanghai, P. R. China 201203; Tongji University, 4800 Caoan Road, Shanghai, P. R. China 201804; University of Colorado Boulder, Boulder, Colorado, USA 80309
Pseudocode | Yes | Algorithm 1: The SMA Learning Algorithm
Open Source Code | Yes | The source codes of all the experiments are publicly available at https://github.com/ldscc/StableMA.git
Open Datasets | Yes | Two widely used datasets are adopted to evaluate SMA: MovieLens 10M (~70k users, 10k items, 10^7 ratings) and Netflix (~480k users, 18k items, 10^8 ratings).
Dataset Splits | No | For each dataset, we randomly split it into training and test sets and keep the ratio of training set to test set as 9:1. (It does not explicitly mention a validation set split; see the split sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running its experiments.
Software Dependencies | No | The paper mentions using a 'stochastic gradient descent method' but does not list specific software dependencies (libraries, frameworks) with version numbers.
Experiment Setup | Yes | In this study, we use learning rate v = 0.001 for the stochastic gradient descent method, µ1 = 0.06 for the L2-regularization coefficient, ϵ = 0.0001 for the gradient descent convergence threshold, and T = 250 for the maximum number of iterations. (A hedged training-loop sketch using these settings follows the table.)
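
The 9:1 train/test split above is the only protocol the paper states; no validation set is mentioned. The sketch below shows one plausible way to reproduce such a split. The file name ratings.npy, the (user, item, rating) column layout, and the fixed seed are illustrative assumptions, not details from the paper.

```python
import numpy as np

def split_ratings(ratings, train_frac=0.9, seed=0):
    """Randomly split rating triples into train/test sets at the given ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(ratings))   # random row order
    cut = int(train_frac * len(ratings))  # 9:1 boundary for train_frac=0.9
    return ratings[idx[:cut]], ratings[idx[cut:]]

# Illustrative usage: ratings.npy is assumed to hold (user, item, rating) rows.
ratings = np.load("ratings.npy")
train, test = split_ratings(ratings, train_frac=0.9)
```

The seed is fixed here only to make reruns repeatable; the paper says nothing about seeding.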
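
The reported setup (learning rate 0.001, L2 coefficient µ1 = 0.06, convergence threshold ϵ = 0.0001, T = 250 iterations) is enough to sketch a generic L2-regularized matrix-factorization SGD loop. This is not the authors' SMA algorithm: the stability term that defines SMA is omitted (see the released StableMA code for the actual implementation), and the rank, initialization scale, and per-epoch convergence test are assumptions made for the sketch.

```python
import numpy as np

def sgd_mf(train, n_users, n_items, rank=50,
           lr=0.001, reg=0.06, eps=1e-4, max_iter=250, seed=0):
    """Plain L2-regularized matrix factorization via SGD.

    NOTE: omits SMA's stability term; this only illustrates the
    reported hyperparameters, not the authors' algorithm.
    """
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, rank))  # assumed init scale
    V = 0.1 * rng.standard_normal((n_items, rank))
    prev_loss = np.inf
    for _ in range(max_iter):            # T = 250 in the paper
        rng.shuffle(train)               # shuffle rating rows in place
        loss = 0.0
        for u, i, r in train:
            u, i = int(u), int(i)
            err = r - U[u] @ V[i]
            pu = U[u].copy()             # use pre-update values for both gradients
            U[u] += lr * (err * V[i] - reg * pu)
            V[i] += lr * (err * pu - reg * V[i])
            loss += err * err
        if abs(prev_loss - loss) < eps:  # ϵ = 0.0001 in the paper
            break
        prev_loss = loss
    return U, V
```

The convergence check on the change in total squared training error is one common reading of a "gradient descent convergence threshold"; the paper does not spell out which quantity ϵ is compared against.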