Accelerated Inexact Soft-Impute for Fast Large-Scale Matrix Completion

Authors: Quanming Yao, James T. Kwok

IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on both synthetic and large recommendation data sets show that the proposed algorithm is much faster than Soft-Impute and other state-of-the-art matrix completion algorithms."
Researcher Affiliation | Academia | "Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong"
Pseudocode | Yes | "Algorithm 1 Power Method(Z, R, ϵ)"; "Algorithm 2 Algorithm to approximate the SVT of Z_t (approx-SVT(Z_t, R, λ, ϵ))"; "Algorithm 3 Accelerated Inexact Soft-Impute (AIS-Impute)". Hedged sketches of all three algorithms follow the table.
Open Source Code | No | The paper links to code for the compared methods (APG and Soft-Impute) in footnotes 1 and 2, but provides no link to, or statement about releasing, code for the proposed AIS-Impute method.
Open Datasets | Yes | The MovieLens data set (footnote 3: http://grouplens.org/datasets/movielens/; Table 2 in the paper) "contains ratings (from 1 to 5) of different users on movies, and has been commonly used in matrix completion experiments [Mazumder et al., 2010; Hsieh and Olsen, 2014]. Following [Wang et al., 2014], we use 50% of the observed ratings for training, 25% for validation and the rest for testing."
Dataset Splits | Yes | Synthetic data: "Half of them are used for training, and the other half as validation set for parameter tuning. Testing is performed on the unobserved (missing) elements." MovieLens: "we use 50% of the observed ratings for training, 25% for validation and the rest for testing." A split sketch follows the table.
Hardware Specification | Yes | "Experiments are performed on a PC with Intel i7 CPU and 16GB RAM."
Software Dependencies | No | "APG and AIS-Impute are in Matlab, while Soft-Impute is in R (and is called from Matlab)." No specific version numbers for Matlab or R are provided.
Experiment Setup | Yes | "Require: partially observed matrix O, parameter λ, decay parameter ν ∈ (0, 1), threshold ϵ." "For further speedup, at step 5, we employ a continuation strategy as in [Toh and Yun, 2010; Mazumder et al., 2010], in which λ_t is initialized to a large value and then decreased gradually. Moreover, as in [O'Donoghue and Candes, 2012; Nesterov, 2013], the algorithm is restarted (at step 12) if F(X) starts to increase." A loop sketch showing continuation and restart follows the table.
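
The Pseudocode row's Algorithm 1 is a block power method: it refines an orthonormal basis for the dominant left singular subspace of a matrix Z, warm-started from a matrix R carried over from the previous iteration. Below is a minimal NumPy sketch of that idea; the stopping test, `max_iter`, and variable names are assumptions, not the authors' Matlab code.

```python
import numpy as np

def power_method(Z, R, eps=1e-3, max_iter=30):
    """Sketch of Algorithm 1: orthonormal basis Q for the dominant
    left singular subspace of Z, warm-started from R (n x k)."""
    Q, _ = np.linalg.qr(Z @ R)                  # initial basis from warm start
    for _ in range(max_iter):
        Q_new, _ = np.linalg.qr(Z @ (Z.T @ Q))  # one power iteration + QR
        # stop once the subspace spanned by Q stops changing
        if np.linalg.norm(Q_new @ (Q_new.T @ Q) - Q) <= eps:
            return Q_new
        Q = Q_new
    return Q
```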
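
Algorithm 2 then approximates the singular value thresholding (SVT) operator: with the basis Q in hand, only a small SVD of QᵀZ is needed, and the singular values are soft-thresholded at λ. The sketch below reuses `power_method` from above; returning the thin factors rather than the full matrix is an assumption that mirrors the low-rank structure the paper exploits.

```python
def approx_svt(Z, R, lam, eps=1e-3):
    """Sketch of Algorithm 2: approximate SVT of Z at threshold lam.
    Returns factors (U, s, V) with SVT(Z) ~= U @ np.diag(s) @ V.T."""
    Q = power_method(Z, R, eps)                  # basis for the top subspace
    Us, sigma, Vt = np.linalg.svd(Q.T @ Z, full_matrices=False)
    keep = sigma > lam                           # drop thresholded-away values
    return Q @ Us[:, keep], sigma[keep] - lam, Vt[keep].T
```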
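
The Experiment Setup row quotes the two devices wrapped around the core update of Algorithm 3: continuation (λ_t starts large and decays toward the target λ) and an adaptive restart whenever the objective F(X) increases. The schematic below assumes a geometric schedule λ_t = ν^t (λ_0 − λ) + λ and the usual Nesterov momentum weight; an exact SVD stands in for approx-SVT for readability, so this illustrates the control flow only, not the authors' implementation.

```python
import numpy as np

def ais_impute(O, mask, lam, nu=0.5, T=500):
    """Schematic of Algorithm 3 (AIS-Impute): accelerated proximal steps
    with continuation on lambda_t and restart on objective increase.
    O: observed matrix (zeros elsewhere); mask: 1 on observed entries."""
    X_prev = X = np.zeros(O.shape)
    lam0 = np.linalg.norm(O)                   # large initial lambda (assumption)
    c, F_prev = 1, np.inf
    for t in range(T):
        lam_t = nu ** t * (lam0 - lam) + lam   # continuation: decay toward lam
        theta = (c - 1) / (c + 2)              # Nesterov momentum weight
        Y = X + theta * (X - X_prev)
        Z = Y + mask * (O - Y)                 # put observed entries back
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)  # stand-in for approx-SVT
        keep = s > lam_t
        X_prev, X = X, (U[:, keep] * (s[keep] - lam_t)) @ Vt[keep]
        F = (0.5 * np.linalg.norm(mask * (X - O)) ** 2
             + lam_t * np.sum(s[keep] - lam_t))   # F(X): loss + nuclear norm
        if F > F_prev:                         # restart: drop the momentum
            c, F_prev = 1, np.inf
        else:
            c, F_prev = c + 1, F
    return X
```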
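
Finally, the MovieLens protocol in the Open Datasets and Dataset Splits rows is a 50/25/25 split over the observed ratings. A short sketch of such a split follows; the function name and fixed seed are illustrative, not taken from the paper.

```python
import numpy as np

def split_observed(n_obs, seed=0):
    """50% train / 25% validation / 25% test over observed-rating indices."""
    idx = np.random.default_rng(seed).permutation(n_obs)
    n_tr, n_val = n_obs // 2, n_obs // 4
    return idx[:n_tr], idx[n_tr:n_tr + n_val], idx[n_tr + n_val:]
```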