Multi-Task Learning for Subspace Segmentation
Authors: Yu Wang, David Wipf, Qing Ling, Wei Chen, Ian Wassell
ICML 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Theoretical analysis and empirical tests are provided to support these claims. In this section we present empirical results designed to highlight the utility of MTSC. For this purpose we compare with a suite of recent competing algorithms implementing SSC (Elhamifar & Vidal, 2013), LRR (Liu et al., 2013), LSR (Lu et al., 2012), and CASS (Lu et al., 2013), in each case using the authors' original code. (An illustrative scoring sketch for such comparisons follows the table.) |
| Researcher Affiliation | Collaboration | Computer Laboratory, University of Cambridge, Cambridge, UK; Microsoft Research, Beijing, China; University of Science and Technology of China, Hefei, Anhui, China |
| Pseudocode | No | The paper describes algorithmic steps and mentions 'update rules contained in the supplementary file', but it does not include any pseudocode or a clearly labeled algorithm block in the main text. |
| Open Source Code | No | The paper states that for comparison, 'the authors' original code' was used for competing algorithms, but it does not explicitly state that the source code for the proposed method (MTSC) is publicly available, nor does it provide a link to it. |
| Open Datasets | Yes | Motion Segmentation Data: We next present evaluations using the Hopkins 155 Motion Database (Elhamifar & Vidal, 2013). |
| Dataset Splits | No | The paper does not provide specific training, validation, or test dataset splits (e.g., exact percentages, absolute sample counts, or cross-validation setup) needed for reproducing the data partitioning for its experiments. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, processor types, memory amounts, or cloud instance specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'the authors original code' for competing algorithms but does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with specific versions). |
| Experiment Setup | No | The paper mentions 'using the noise models provided in the original code and tuning parameters adjusted from default settings' for the Hopkins data, but does not report the specific parameter values (e.g., regularization or noise-model settings) or other concrete experimental setup details needed to reproduce the results for the proposed method. |
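
For context on the comparison quoted under Research Type, the sketch below shows how subspace segmentation results on Hopkins-155-style motion data are commonly scored: the clustering error, i.e. the misclassification rate under the best one-to-one matching between predicted and ground-truth segment labels. This is not the authors' code; the function name `clustering_error` and the toy labels are hypothetical, and only the standard metric is assumed.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def clustering_error(pred_labels, true_labels):
    """Misclassification rate after optimally permuting predicted labels."""
    pred = np.asarray(pred_labels)
    true = np.asarray(true_labels)
    classes_pred = np.unique(pred)
    classes_true = np.unique(true)
    # Confusion matrix between predicted and ground-truth segments.
    agreement = np.zeros((len(classes_pred), len(classes_true)), dtype=int)
    for i, p in enumerate(classes_pred):
        for j, t in enumerate(classes_true):
            agreement[i, j] = np.sum((pred == p) & (true == t))
    # Hungarian algorithm: find the label matching that maximizes agreement.
    row_ind, col_ind = linear_sum_assignment(-agreement)
    correct = agreement[row_ind, col_ind].sum()
    return 1.0 - correct / len(true)


if __name__ == "__main__":
    # Toy example: two subspaces, labels permuted, one point misassigned -> 20% error.
    true = [0, 0, 1, 1, 1]
    pred = [1, 1, 0, 0, 1]
    print(f"clustering error: {clustering_error(pred, true):.2%}")
```

The Hungarian matching step makes the metric invariant to an arbitrary permutation of cluster labels, which is why it is the usual way segmentation accuracy is reported across methods such as SSC, LRR, LSR, CASS, and MTSC.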