Subspace Clustering via New Low-Rank Model with Discrete Group Structure Constraint
Authors: Feiping Nie, Heng Huang
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Both synthetic and real world datasets demonstrate our proposed model's effectiveness." (Section 5 Experiments, 5.1 Experimental Results on Synthetic Data, 5.2 Experiments on Real World Data) |
| Researcher Affiliation | Academia | Department of Computer Science and Engineering University of Texas at Arlington feipingnie@gmail.com, heng@uta.edu |
| Pseudocode | Yes | Algorithm 1 Algorithm to solve problem (5). Algorithm 2 Algorithm to solve problem (19). |
| Open Source Code | No | No explicit statement or link to open-source code for the methodology is provided. |
| Open Datasets | Yes | We first test our model on the Hopkins 155 motion dataset, which consists of 155 sequences (cites [Costeira et al., 1997]). Face datasets including JAFFE [Lyons et al., 1998], MSRA [Liu et al., 2007], XM2VTS [XM2], and a human palm image dataset called PALM [Yan et al., 2007] are also used. The [XM2] citation links to http://www.ee.surrey.ac.uk/cvssp/xm2vtsdb/. |
| Dataset Splits | No | The paper discusses data preprocessing and initialization methods but does not provide specific training/validation/test dataset splits (e.g., percentages or sample counts) or mention cross-validation. |
| Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory specifications, or cloud instance types) used for running experiments are mentioned. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) are mentioned for replication. |
| Experiment Setup | Yes | We run our algorithm with ten different initializations and select the results with the best objective values. We use PCA to project the coordinates in each sequence into dimensions ranging from 5 to 20. Then we use the K-means method to get our initialized Gi (1 ≤ i ≤ k). In the data, we also add 5%-level noise to deviate from the subspaces. |
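
The quoted setup (PCA projection, K-means initialization of the group indicators, best-of-ten restarts, 5% noise) can be sketched as follows. This is a minimal illustration under assumed data shapes, not the authors' code; the subspace dimensions and sample counts are illustrative, and `KMeans(n_init=10)` stands in for the paper's "ten different initializations, keep the best objective" protocol.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic data drawn from k linear subspaces, with ~5% additive noise.
# (Dimensions and cluster sizes here are illustrative, not from the paper.)
k, ambient_dim, sub_dim, per_cluster = 3, 30, 4, 50
X, labels = [], []
for i in range(k):
    basis = rng.standard_normal((ambient_dim, sub_dim))   # random subspace basis
    coeffs = rng.standard_normal((sub_dim, per_cluster))
    pts = basis @ coeffs                                  # points on the subspace
    scale = np.linalg.norm(pts) / np.sqrt(pts.size)       # RMS magnitude of the data
    pts += 0.05 * scale * rng.standard_normal(pts.shape)  # 5%-level noise
    X.append(pts.T)
    labels += [i] * per_cluster
X = np.vstack(X)

# Project to a lower dimension with PCA (the paper sweeps 5 to 20).
X_proj = PCA(n_components=10).fit_transform(X)

# Initialize the group indicators Gi (1 <= i <= k) with K-means;
# n_init=10 runs ten initializations and keeps the lowest-inertia result.
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_proj)
init_groups = km.labels_
print(init_groups.shape)
```

The K-means labels would then seed the discrete group structure that the paper's alternating algorithm refines.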