High-Rank Matrix Completion and Clustering under Self-Expressive Models
Authors: Ehsan Elhamifar
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study the performance of our algorithm for completion and clustering of synthetic and real data. We implement (10) and (12) with ∑_{j=1}^{N} ∑_{i=1}^{N} \|c_{ij}\| instead of the group-sparsity term using the ADMM framework [34, 35]. Unless stated otherwise, we set λ = 0.01 and γ = 0.1. |
| Researcher Affiliation | Academia | E. Elhamifar College of Computer and Information Science Northeastern University Boston, MA 02115 eelhami@ccs.neu.edu |
| Pseudocode | No | The paper describes the proposed algorithms verbally and mathematically, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We consider the problem of motion segmentation [37, 38] with missing entries on the Hopkins 155 dataset, with 155 sequences of 2 and 3 motions. (...) We use the CMU Mocap dataset, where each data point corresponds to measurements from n sensors at a particular time instant. |
| Dataset Splits | No | The paper describes how missing entries are introduced into datasets (e.g., 'ρ fraction of entries... uniformly at random'), but does not provide specific train, validation, or test dataset splits, percentages, or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running its experiments. |
| Software Dependencies | No | The paper mentions using the ADMM framework [34, 35] for implementation but does not specify any software names with version numbers (e.g., Python, MATLAB, or specific libraries like PyTorch, NumPy) to reproduce the experimental environment. |
| Experiment Setup | Yes | Unless stated otherwise, we set λ = 0.01 and γ = 0.1. However, the results are stable for λ ∈ [0.005, 0.05] and γ ∈ [0.01, 0.5]. |
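
The paper reports solving the self-expressive objective with the ℓ1 norm ∑_{i,j} |c_{ij}| in place of the group-sparsity term, using ADMM with λ = 0.01. As a minimal sketch of that ℓ1 self-expressive subproblem, the snippet below solves min_C λ‖C‖₁ + ½‖X − XC‖²_F subject to diag(C) = 0 via proximal gradient (ISTA) rather than the paper's ADMM; the solver choice, iteration count, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def soft_threshold(A, tau):
    # Elementwise soft-thresholding: the proximal operator of tau * ||.||_1.
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def self_expressive_l1(X, lam=0.01, n_iter=300):
    """Sketch: min_C lam*||C||_1 + 0.5*||X - X C||_F^2  s.t.  diag(C) = 0.

    Solved by proximal gradient (ISTA); the paper instead uses ADMM [34, 35].
    X has one data point per column, so C holds self-expression coefficients.
    """
    N = X.shape[1]
    C = np.zeros((N, N))
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ C - X)          # gradient of the quadratic term
        C = soft_threshold(C - grad / L, lam / L)
        np.fill_diagonal(C, 0.0)          # enforce c_ii = 0 (no self-expression)
    return C
```

With a small λ, each point should be reconstructed from the other points, so ‖X − XC‖_F drops well below ‖X‖_F; the paper's reported stability over λ ∈ [0.005, 0.05] suggests the exact value is not critical.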