Factor Group-Sparse Regularization for Efficient Low-Rank Matrix Recovery

Authors: Jicong Fan, Lijun Ding, Yudong Chen, Madeleine Udell

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show promising performance of factor group-sparse regularization for low-rank matrix completion and robust principal component analysis.
Researcher Affiliation | Academia | Jicong Fan (Cornell University, Ithaca, NY 14850, jf577@cornell.edu); Lijun Ding (Cornell University, Ithaca, NY 14850, ld446@cornell.edu); Yudong Chen (Cornell University, Ithaca, NY 14850, yudong.chen@cornell.edu); Madeleine Udell (Cornell University, Ithaca, NY 14850, udell@cornell.edu)
Pseudocode | No | The paper refers to optimization methods such as ADMM and PALM, stating "Details are in the supplement", but provides no pseudocode or algorithm blocks in the main text.
Open Source Code | Yes | The MATLAB code for the proposed methods is available at https://github.com/udellgroup/Codes-of-FGSR-for-effecient-low-rank-matrix-recovery
Open Datasets | Yes | The paper uses the MovieLens-1M dataset [40], which consists of 1 million ratings (1 to 5) for 3900 movies by 6040 users. Movies rated by fewer than 5 users are deleted, because the corresponding ratings may never be recovered when the matrix rank is higher than 5. The experiments randomly sample 70% or 50% of each user's known ratings and perform matrix completion.
Dataset Splits | Yes | "We randomly sample 70% or 50% of the known ratings of each user and perform matrix completion."
Hardware Specification | No | The paper discusses computational time and complexity but gives no details about the hardware (e.g., CPU or GPU model, memory) used for the experiments.
Software Dependencies | No | The paper mentions "MATLAB codes" and notes that the "Riemannian pursuit code mixes C and MATLAB", but it does not specify version numbers for MATLAB, the C compiler, or any other software dependencies.
Experiment Setup | No | The paper states, "We choose the parameters of all methods to ensure they perform as well as possible. Details about the optimizations, parameters, evaluation metrics are in the supplement." Specific setup details such as hyperparameters are therefore not given in the main text.
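The dataset protocol described above (drop movies rated by fewer than 5 users, then randomly reveal 70% or 50% of each user's known ratings) can be sketched as follows. This is a minimal NumPy illustration of that protocol, not the authors' MATLAB code; the function name, the dense users-by-movies representation with 0 for missing entries, and the fixed seed are assumptions for the sketch.

```python
import numpy as np

def make_completion_split(ratings, min_raters=5, keep_frac=0.7, seed=0):
    """Sketch of the MovieLens-1M protocol described above (hypothetical helper):
    drop movies rated by fewer than `min_raters` users, then randomly reveal
    `keep_frac` of each user's known ratings for training; hold out the rest.
    `ratings` is a users x movies array with 0 marking missing entries.
    """
    rng = np.random.default_rng(seed)
    # Remove movies with too few ratings; their columns may be unrecoverable.
    rated = ratings > 0
    keep_movies = rated.sum(axis=0) >= min_raters
    R = ratings[:, keep_movies]
    observed = R > 0
    # Per user, reveal keep_frac of the known ratings.
    mask = np.zeros_like(R, dtype=bool)
    for u in range(R.shape[0]):
        idx = np.flatnonzero(observed[u])
        if len(idx) == 0:
            continue
        keep = rng.choice(idx, size=int(round(keep_frac * len(idx))), replace=False)
        mask[u, keep] = True
    train = np.where(mask, R, 0)          # observed entries for completion
    test = np.where(observed & ~mask, R, 0)  # held-out entries for evaluation
    return train, test
```

A completion method would then be fit on the nonzero entries of `train` and scored against the nonzero entries of `test`; the two supports are disjoint by construction.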