A Unified Framework for Structured Low-rank Matrix Learning

Authors: Pratik Jawanpuria, Bamdev Mishra

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on problems such as standard/robust/non-negative matrix completion, Hankel matrix learning and multi-task learning demonstrate the efficacy of our approach. In this section, we evaluate the generalization performance as well as computational efficiency of our approach against state-of-the-art in different applications."
Researcher Affiliation | Industry | "Microsoft, India. Correspondence to: Pratik Jawanpuria <pratik.jawanpuria@microsoft.com>, Bamdev Mishra <bamdevm@microsoft.com>."
Pseudocode | Yes | "Algorithm 1: Proposed first- and second-order algorithms for (3)"
Open Source Code | Yes | "Our codes are available at https://pratikjawanpuria.com/."
Open Datasets | Yes | "Netflix (Recht and Ré, 2013), MovieLens10M (ML10m), and MovieLens20M (ML20m) (Harper and Konstan, 2015)."
Dataset Splits | No | "For every data set, we create five random 80/20 train/test splits." (No explicit mention of a validation split.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies | Yes | "All our algorithms are implemented using the Manopt toolbox (Boumal et al., 2014)."
Experiment Setup | Yes | "For every split, the regularization parameters for respective algorithms are cross-validated to obtain their best performance. All the fixed-rank algorithms (R3MC, LMaFit, MMBS, RTRMC, RSLM) are provided the rank r = 10. The rank r for both RSLM and RMC is fixed at r = 10."
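The split protocol quoted above (five random 80/20 train/test partitions of the observed entries, no validation split) can be sketched as follows. This is an illustrative reconstruction, not the authors' Manopt-based MATLAB code; the function name `make_splits` and the seeding scheme are assumptions for the example.

```python
import numpy as np

def make_splits(n_obs, n_splits=5, train_frac=0.8, seed=0):
    """Generate index pairs for repeated random train/test splits.

    Mirrors the paper's stated protocol of five random 80/20
    train/test splits over the observed entries. The fixed seed
    here is only for reproducibility of this sketch.
    """
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_splits):
        perm = rng.permutation(n_obs)          # shuffle entry indices
        cut = int(train_frac * n_obs)          # 80% boundary
        splits.append((perm[:cut], perm[cut:]))
    return splits

splits = make_splits(1000)
train_idx, test_idx = splits[0]
print(len(splits), len(train_idx), len(test_idx))  # 5 800 200
```

Each split would then feed a fixed-rank (r = 10) solver on the training entries, with test-set RMSE averaged over the five splits; regularization parameters are tuned per split, as the quoted setup describes.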