Scalable Algorithms for Tractable Schatten Quasi-Norm Minimization

Authors: Fanhua Shang, Yuanyuan Liu, James Cheng

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on synthetic and real-world data show that our algorithms are more accurate than the state-of-the-art methods, and are orders of magnitude faster. We now evaluate both the effectiveness and efficiency of our algorithms for solving matrix completion problems, such as collaborative filtering and image recovery."
Researcher Affiliation | Academia | "Fanhua Shang, Yuanyuan Liu, James Cheng. Department of Computer Science and Engineering, The Chinese University of Hong Kong. {fhshang, yyliu, jcheng}@cse.cuhk.edu.hk"
Pseudocode | Yes | "Algorithm 1: Solving (3) via PALM. Input: P_Ω(D), the given rank d, and λ. Initialize: U_0, V_0, ε, and k = 0. 1: while not converged do 2: update l^g_{k+1} and U_{k+1} by (9) and (6), respectively; 3: update l^h_{k+1} and V_{k+1} by (9) and (7), respectively; 4: check the convergence condition, max{‖U_{k+1} − U_k‖_F, ‖V_{k+1} − V_k‖_F} < ε; 5: end while. Output: U_{k+1}, V_{k+1}." (A runnable sketch of this scheme is given after the table.)
Open Source Code | No | The paper provides links to third-party code used for comparison methods (e.g., IRucLq, IRNN) but does not state that the authors' own implementation is available or provide a link to it.
Open Datasets | Yes | "We tested our algorithms on three real-world recommendation system data sets: the MovieLens-1M, MovieLens-10M, and Netflix (KDD Cup 2007) datasets."
Dataset Splits | No | "We randomly chose 50%, 70% and 90% as the training set and the remaining as the testing set, and the experimental results are reported over 10 independent runs." The paper does not explicitly mention a separate validation set. (A split helper is sketched after the table.)
Hardware Specification | Yes | "All experiments were conducted on an Intel Xeon E7-4830 v2 2.20GHz CPU with 64GB RAM."
Software Dependencies | No | The paper mentions various algorithms and methods (e.g., IRucLq, IRNN, APGL, LMaFit) but does not specify version numbers for the software libraries or frameworks used in its implementation (e.g., Python, MATLAB, PyTorch, TensorFlow).
Experiment Setup | Yes | "For our algorithms, we set the regularization parameter λ = 5 or λ = 100 for noisy synthetic and real-world data, respectively. Note that the rank parameter d is estimated by the strategy in (Wen, Yin, and Zhang 2012). In addition, only 20% or 30% of the entries of D are sampled uniformly at random as training data, i.e., sampling ratio (SR) = 20% or 30%. The rank parameter d of our algorithms is set to 1.25r as in (Wen, Yin, and Zhang 2012)." (A synthetic-setup sketch follows the table.)
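
Since the authors' code is not released (see the Open Source Code row), the following is a minimal NumPy sketch of the alternating scheme quoted in the Pseudocode row. It assumes problem (3) is a factored objective of the form min_{U,V} (λ/2)(‖U‖_* + ‖V‖_*) + (1/2)‖P_Ω(UVᵀ − D)‖_F², and takes the step-size constants l^g and l^h as the spectral norms ‖VᵀV‖₂ and ‖UᵀU‖₂. Function names, initialization, and defaults are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def palm_completion(D, mask, d, lam=5.0, eps=1e-4, max_iter=500, seed=0):
    """PALM-style alternating proximal updates for the assumed factored model
    min (lam/2)(||U||_* + ||V||_*) + 1/2 ||P_Omega(U V^T - D)||_F^2.
    `mask` is a boolean array marking the observed entries P_Omega(D)."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    U = rng.standard_normal((m, d)) / np.sqrt(d)
    V = rng.standard_normal((n, d)) / np.sqrt(d)
    for _ in range(max_iter):
        # U-block: gradient step on the observed residual, then nuclear prox.
        R = mask * (U @ V.T - D)
        lg = max(np.linalg.norm(V.T @ V, 2), 1e-8)   # assumed Lipschitz constant
        U_new = svt(U - (R @ V) / lg, lam / (2.0 * lg))
        # V-block: same pattern with the updated U.
        R = mask * (U_new @ V.T - D)
        lh = max(np.linalg.norm(U_new.T @ U_new, 2), 1e-8)
        V_new = svt(V - (R.T @ U_new) / lh, lam / (2.0 * lh))
        # Convergence check from Algorithm 1.
        done = max(np.linalg.norm(U_new - U), np.linalg.norm(V_new - V)) < eps
        U, V = U_new, V_new
        if done:
            break
    return U, V
```

Note that each prox step only needs the SVD of an m×d or n×d factor rather than of the full m×n matrix, which is presumably the source of the paper's scalability claim for factored Schatten quasi-norm minimization.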
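The protocol in the Dataset Splits row (random 50%/70%/90% training fractions, remainder for testing, 10 independent runs, no validation set) could be reproduced with a helper along these lines; the function name and seeding scheme are assumptions for illustration.

```python
import numpy as np

def random_splits(n_ratings, train_frac=0.5, n_runs=10, seed=0):
    """Yield (train_idx, test_idx) index pairs for independent random splits:
    a `train_frac` fraction of the observed ratings for training, the rest
    for testing, repeated `n_runs` times as in the paper's protocol."""
    rng = np.random.default_rng(seed)
    for _ in range(n_runs):
        perm = rng.permutation(n_ratings)
        cut = int(train_frac * n_ratings)
        yield perm[:cut], perm[cut:]
```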
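Similarly, the synthetic setting described in the Experiment Setup row (rank-r ground truth, uniform sampling ratio SR = 20% or 30%, rank parameter d = 1.25r, λ = 5 for noisy synthetic data) suggests a generator like the one below; the matrix dimensions and noise level are assumptions not stated in the excerpt.

```python
import numpy as np

def synthetic_problem(m=1000, n=1000, r=10, sr=0.2, noise=0.1, seed=0):
    """Rank-r matrix observed at a uniformly random fraction `sr` of entries."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r part
    D += noise * rng.standard_normal((m, n))                       # additive noise
    mask = rng.random((m, n)) < sr          # sampling ratio SR = 20% (or 0.3)
    d = int(np.ceil(1.25 * r))              # rank parameter d = 1.25 r
    return D, mask, d

# Example: generate a problem and run the PALM sketch above with lambda = 5
# (reuses palm_completion from the first code block).
D, mask, d = synthetic_problem()
U, V = palm_completion(D, mask, d, lam=5.0)
```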