Algebraic Variety Models for High-Rank Matrix Completion

Authors: Greg Ongie, Rebecca Willett, Robert D. Nowak, Laura Balzano

ICML 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show the proposed algorithm is able to recover synthetically generated data up to the predicted sampling complexity bounds, and outperforms standard low rank matrix completion and subspace clustering algorithms in experiments with real data."
Researcher Affiliation | Academia | "Department of EECS, University of Michigan, Ann Arbor, Michigan, USA; Department of ECE, University of Wisconsin, Madison, Wisconsin, USA."
Pseudocode | Yes | "Algorithm 1 Kernelized IRLS to solve (VMC)."
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for its methodology is publicly available.
Open Datasets | Yes | "using the Hopkins 155 dataset (Tron & Vidal, 2007)"; "CMU Mocap database (subject 56, trial 6)"; footnote: http://mocap.cs.cmu.edu
Dataset Splits | No | The paper describes undersampling data for matrix completion but does not give specific percentages or sample counts for training, validation, or test splits.
Hardware Specification | No | The paper does not report hardware details (e.g., CPU or GPU models, memory specifications) used to run the experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies, libraries, or programming languages used in the experiments.
Experiment Setup | Yes | "Empirically, we find that setting γ0 = (0.1)^d λmax, where λmax is the largest eigenvalue of the kernel matrix obtained from the initialization, and η = 1.01 work well in a variety of settings. For all our experiments in Section 5 we fix p = 1/2, which was found to give the best matrix recovery results for synthetic data. We also use a zero-filled initialization X0 in all cases."
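To make the reported setup concrete, below is a minimal NumPy sketch of a kernelized-IRLS-style iteration in the spirit of Algorithm 1 (VMC). It follows the quoted settings: a degree-d polynomial kernel, p = 1/2, γ0 = (0.1)^d · λmax with λmax taken from the kernel matrix of the zero-filled initialization, and a smoothing parameter shrunk by η = 1.01 each pass. The gradient step, the kernel bias c = 1, the step size, and all variable names are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def vmc_irls_sketch(X_obs, mask, d=2, p=0.5, eta=1.01, n_iters=50, step=1e-2):
    """Hedged sketch of a kernelized IRLS iteration for variety-based
    matrix completion (VMC-like); not the authors' code.

    X_obs : observed matrix with zeros at missing entries (zero-filled init).
    mask  : boolean array, True where an entry is observed.
    """
    X = X_obs.copy()
    # Degree-d polynomial kernel of the columns of X (bias c = 1 is an assumption).
    K = (X.T @ X + 1.0) ** d
    lam_max = np.linalg.eigvalsh(K)[-1]          # largest eigenvalue of the kernel
    gamma = (0.1 ** d) * lam_max                 # gamma_0 = (0.1)^d * lambda_max
    for _ in range(n_iters):
        K = (X.T @ X + 1.0) ** d
        # IRLS weight matrix W = (K + gamma I)^(p/2 - 1), via eigendecomposition.
        vals, vecs = np.linalg.eigh(K)
        W = (vecs * (vals + gamma) ** (p / 2 - 1)) @ vecs.T
        # Gradient of tr(W K(X)) w.r.t. X for the elementwise polynomial kernel:
        # grad = 2 d X (K' ∘ W), where K' = (X^T X + 1)^(d-1) elementwise.
        Kp = (X.T @ X + 1.0) ** (d - 1)
        G = 2 * d * X @ (Kp * W)
        # Update only unobserved entries; observed entries stay fixed.
        X[~mask] -= step * G[~mask]
        gamma /= eta                             # shrink smoothing parameter (eta = 1.01)
    return X
```

The eigendecomposition is one standard way to apply the matrix power (K + γI)^(p/2−1) for the non-convex Schatten-p surrogate with p = 1/2; the paper's actual update rule may differ in its step choice and stopping criteria.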