Robust Subspace Segmentation by Simultaneously Learning Data Representations and Their Affinity Matrix
Authors: Xiaojie Guo
IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on both synthetic and real data demonstrate the efficacy of the proposed method and its superior performance over the state-of-the-art alternatives. |
| Researcher Affiliation | Academia | Xiaojie Guo, State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, xj.max.guo@gmail.com |
| Pseudocode | Yes | Algorithm 1: Proposed Robust Subspace Segmentation |
| Open Source Code | No | The paper mentions that codes for *compared methods* were downloaded from authors' webpages, but there is no statement about the availability of the proposed method's source code. |
| Open Datasets | Yes | We compare the proposed method with other state-of-the-art methods for face clustering on the Extended Yale B dataset [Lee et al., 2005]. Further, we compare the performance of SSC, LRR, LSR, CASS and our method on the USPS dataset (www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html), which consists of 10 classes corresponding to the 10 handwritten digits 0–9. Moreover, we attempt to test the abilities of different approaches on a more challenging dataset, UMIST [Graham and Allinson, 1998]. (A hedged loading sketch for the USPS file follows the table.) |
| Dataset Splits | No | The paper mentions using "average segmentation accuracies over 10 independent trials" but does not provide specific training, validation, or test dataset splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | No | The paper mentions "Our algorithm takes 4s to finish the computation on our PC", which is a general statement and does not provide specific hardware details (e.g., CPU/GPU models, memory). |
| Software Dependencies | No | The paper mentions using "Normalized Cuts" and other methods' codes downloaded from authors' webpages, but it does not specify any software dependencies with version numbers for its own implementation. |
| Experiment Setup | Yes | To simplify our parameters, we let λ1 = λ2 = λ3 = λ̂ ∈ {0.1, 0.2, ..., 1.0}, although the simplification may very likely exclude the best performance for our method. Based on this testing, we will fix k = 3 for our method for the rest of the experiments. (A hedged sketch of the implied evaluation protocol follows the table.) |
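
For the USPS experiment referenced in the Open Datasets row, the data is distributed in libsvm format at the URL above. The following is a minimal loading sketch, assuming scikit-learn and a local copy of the file; the paper does not describe its own loading pipeline, and the file name `usps.bz2` is the libsvm distribution name, not something stated in the paper.

```python
# Minimal sketch: load the USPS digits (libsvm format) cited in the Open Datasets row.
# Assumes "usps.bz2" was downloaded from
# www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html.
from sklearn.datasets import load_svmlight_file

X, y = load_svmlight_file("usps.bz2")  # X: sparse (n_samples, 256), y: class labels as distributed
X = X.toarray()                        # dense features, one 16x16 image per row
print(X.shape, sorted(set(y)))
```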
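
The Experiment Setup and Dataset Splits rows together imply a simple evaluation loop, even though the paper releases no code. The sketch below is an assumption-laden reconstruction, not the authors' implementation: `learn_affinity` is a hypothetical placeholder for the paper's Algorithm 1 (which jointly learns the representation and its affinity matrix), scikit-learn's `SpectralClustering` stands in for the Normalized Cuts code the paper mentions, and accuracy is the usual Hungarian best-match score averaged over 10 independent trials.

```python
# Hedged sketch of the evaluation protocol implied by the table above:
# sweep the tied regularizer lambda-hat over {0.1, ..., 1.0}, cluster on the
# learned affinity, and average segmentation accuracy over 10 trials.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import SpectralClustering

def clustering_accuracy(y_true, y_pred):
    """Best-match segmentation accuracy via the Hungarian algorithm."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels_true, labels_pred = np.unique(y_true), np.unique(y_pred)
    cost = np.zeros((labels_pred.size, labels_true.size))
    for i, p in enumerate(labels_pred):
        for j, t in enumerate(labels_true):
            cost[i, j] = -np.sum((y_pred == p) & (y_true == t))  # negate to maximize matches
    row, col = linear_sum_assignment(cost)
    return -cost[row, col].sum() / y_true.size

def evaluate(X, y, learn_affinity, n_clusters, n_trials=10):
    """Grid over lambda-hat (lambda1 = lambda2 = lambda3); mean accuracy per setting."""
    results = {}
    for lam in np.arange(0.1, 1.01, 0.1):
        accs = []
        for _ in range(n_trials):
            W = learn_affinity(X, lam)  # hypothetical: returns a symmetric, non-negative affinity
            pred = SpectralClustering(n_clusters=n_clusters,
                                      affinity="precomputed").fit_predict(W)
            accs.append(clustering_accuracy(y, pred))
        results[round(lam, 1)] = float(np.mean(accs))
    return results
```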