Sparse Subspace Clustering with Missing Entries

Authors: Congyuan Yang, Daniel Robinson, René Vidal

ICML 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on synthetic and real data show the advantages and disadvantages of the proposed methods, which all outperform the natural approach (low-rank matrix completion followed by sparse subspace clustering) when the data matrix is high-rank or the percentage of missing entries is large." From Section 4 (Experiments): "In this section, we evaluate the performance of MC+SSC, ZF+SSC, SSC-EWZF, SSC-EC, SSC-CEC, and BCDS on both synthetic data and the Hopkins 155 motion segmentation dataset (Tron & Vidal, 2007)." (A hedged sketch of the ZF+SSC baseline appears after this table.)
Researcher Affiliation | Academia | Congyuan Yang (YANGCY@JHU.EDU), Daniel Robinson (DANIEL.P.ROBINSON@JHU.EDU), René Vidal (RVIDAL@CIS.JHU.EDU), Johns Hopkins University, 3400 N Charles St., Baltimore, MD, USA
Pseudocode | No | The paper describes its algorithms in prose, but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology.
Open Datasets | Yes | "We evaluate different methods on the Hopkins 155 data set, which contains 155 video sequences with 2 or 3 moving objects" (Tron & Vidal, 2007).
Dataset Splits | No | The paper does not explicitly define a validation set or split for hyperparameter tuning.
Hardware Specification | No | The paper does not report the hardware used to run its experiments (e.g., GPU/CPU models or memory amounts).
Software Dependencies | No | The paper mentions methods such as ADMM and LASSO, but does not list specific software dependencies or version numbers.
Experiment Setup | Yes | "All algorithms involve a penalty parameter λ that should be carefully chosen so as to balance reconstruction error and sparsity: a small λ may lead to sparse solutions, but a large reconstruction error, while a large λ may give very good reconstruction, but non-sparse solutions. In (Elhamifar & Vidal, 2013), an adaptive choice for λ in (5) is given for a complete data matrix X as λ = α / min_j max_{i≠j} |XᵀX|_{ij}, where α > 1 is a new tuning parameter." (A sketch of this adaptive rule appears below.)
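
As a reading aid, here is a minimal sketch of the adaptive λ rule quoted in the Experiment Setup row, assuming X is a D×N matrix whose columns are the (complete) data points; the function name and the default α are illustrative, not taken from the paper:

```python
import numpy as np

def adaptive_lambda(X, alpha=10.0):
    # lambda = alpha / min_j max_{i != j} |X^T X|_{ij}
    # X: D x N, columns are data points; alpha (> 1) remains a tuning
    # parameter, and the default 10.0 is illustrative only.
    G = np.abs(X.T @ X)           # |x_i^T x_j| for all pairs (i, j)
    np.fill_diagonal(G, -np.inf)  # exclude the i == j terms from the max
    mu = G.max(axis=0).min()      # min over j of max over i != j
    return alpha / mu
```

Note that, as the quote says, this rule is stated for a complete data matrix X; with missing entries, X (and hence this rule) is not directly available, which is part of the paper's motivation for its entry-wise variants.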
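
For the ZF+SSC baseline named in the Research Type row, a minimal hedged sketch follows, assuming the usual SSC pipeline: zero-fill the missing entries, solve a LASSO self-expressive model per column, and spectrally cluster the resulting affinity. The function name, `lam`, and the solver choice are assumptions for illustration, not the paper's implementation (the paper's SSC-EWZF, SSC-EC, and SSC-CEC methods modify the objective rather than just the data):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def zf_ssc(X, mask, n_clusters, lam=0.01):
    # X: D x N with columns as data points; mask: boolean D x N,
    # True where an entry is observed. `lam` is illustrative and would
    # normally be set by a rule like adaptive_lambda above.
    Xz = np.where(mask, X, 0.0)                 # zero-fill missing entries
    N = Xz.shape[1]
    C = np.zeros((N, N))
    for j in range(N):
        idx = np.arange(N) != j                 # enforce c_jj = 0
        # LASSO form of the self-expressive model: x_j ~= X_{-j} c
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(Xz[:, idx], Xz[:, j])
        C[idx, j] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                 # symmetric affinity matrix
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity='precomputed').fit_predict(W)
    return labels
```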