Selective Sampling-based Scalable Sparse Subspace Clustering
Authors: Shin Matsushima, Maria Brbić
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate effectiveness of our approach. |
| Researcher Affiliation | Academia | Shin Matsushima, University of Tokyo, smatsus@graco.c.u-tokyo.ac.jp; Maria Brbić, Stanford University, mbrbic@cs.stanford.edu |
| Pseudocode | Yes | Pseudocode of representation learning step is summarized in Algorithm 1. |
| Open Source Code | Yes | Our code is available at https://github.com/smatsus/S5C. |
| Open Datasets | Yes | We verify the effectiveness of S5C on six benchmark datasets including face image dataset Yale B [36, 37], motion segmentation Hopkins 155 [38], object recognition datasets COIL-100 [39] and CIFAR-10 [40], handwritten digits dataset MNIST [41], letter recognition dataset of different fonts Letter-rec [42], and handwritten character recognition dataset Devanagari [43]. |
| Dataset Splits | Yes | The summary of datasets and details of experimental setup are provided in Appendix E. For Yale B, we used the standard splits as in [24, 25]. |
| Hardware Specification | No | No specific hardware details (like CPU/GPU models, memory size, or specific machine names) used for running experiments were provided in the paper. |
| Software Dependencies | No | The paper mentions software like GLMNET [31] and coordinate descent methods [32], but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | In all experiments, we use only one random subsample, i.e., |I| = 1. For SSSC, we used batch size of 200, number of iterations T=10, λ=0.01. |
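The reported hyperparameters could be gathered into a single configuration for a reproduction attempt. This is a minimal sketch with illustrative names; the dictionary keys and the `describe` helper are hypothetical and not taken from the released S5C repository.

```python
# Hypothetical configuration mirroring the reported experiment setup;
# key names are illustrative, not taken from the S5C code release.
s5c_config = {
    "num_subsamples": 1,   # |I| = 1: a single random subsample
    "batch_size": 200,     # SSSC batch size
    "num_iterations": 10,  # T = 10
    "lambda": 0.01,        # regularization parameter lambda
}

def describe(cfg):
    """Return a one-line summary of the experimental setup."""
    return (f"|I|={cfg['num_subsamples']}, batch={cfg['batch_size']}, "
            f"T={cfg['num_iterations']}, lambda={cfg['lambda']}")
```

A reproduction script could log `describe(s5c_config)` alongside results to make each run's settings explicit.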