A Joint Optimization Framework of Sparse Coding and Discriminative Clustering
Authors: Zhangyang Wang, Yingzhen Yang, Shiyu Chang, Jinyan Li, Simon Fong, Thomas S. Huang
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on several benchmark datasets verify remarkable performance improvements led by the proposed joint optimization. We conduct our clustering experiments on four popular real datasets. |
| Researcher Affiliation | Academia | University of Illinois at Urbana-Champaign, Urbana, IL, USA University of Macau, Macau, China |
| Pseudocode | Yes | Algorithm 1: Stochastic gradient descent algorithm for solving (2), with C(A, w) as defined in (4). Algorithm 2: Stochastic gradient descent algorithm for solving (2), with C(A, w) as defined in (9). (A hypothetical sketch of such a loop appears after the table.) |
| Open Source Code | No | The paper mentions using "the publicly available CPMMC code for two-class clustering [Zhao et al., 2008]" but does not provide access or state the release of the authors' own implementation code for the methodology described in this paper. |
| Open Datasets | Yes | We conduct our clustering experiments on four popular real datasets, which are summarized in Table 1: ORL (400 images, 10 classes, dimension 1,024), MNIST (70,000 images, 10 classes, dimension 784), COIL20 (1,440 images, 20 classes, dimension 1,024), CMU-PIE (41,368 images, 68 classes, dimension 1,024). |
| Dataset Splits | No | The paper does not provide explicit training/validation/test dataset splits with percentages, sample counts, or citations to predefined splits. It mentions "10 test runs are conducted on different randomly chosen clusters" for a specific experiment, but this is not a general train/validation/test split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU/CPU models, memory, or cloud computing specifications. |
| Software Dependencies | No | The paper mentions algorithms and external code (e.g., "K-SVD algorithm", "CPMMC code"), but it does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, specific library versions). |
| Experiment Setup | Yes | All images are first reshaped into vectors, and PCA is then applied to reduce the data dimensionality while keeping 98% of the information. For all algorithms that involve graph-regularized sparse coding, the graph regularization parameter α is fixed to 1, and the dictionary size p is 128 by default. For joint EMC and joint MMC, we set ITER to 30, ρ to 0.9, and t0 to 5. (A hypothetical sketch of this preprocessing appears after the table.) |
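The pseudocode row refers to stochastic gradient descent loops over a joint sparse-coding/clustering objective. Since the authors' implementation is not released, the following is only a minimal Python sketch of what such a loop could look like: `joint_sgd`, `sparse_code`, the ISTA solver standing in for graph-regularized sparse coding, the quadratic stub for the clustering cost C(A, w), and the ρ^(t/t0) step-size decay are all our assumptions, not the paper's method.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm (used by the ISTA step below).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(x, D, lam=0.1, n_steps=50):
    # Plain ISTA: a simple stand-in for the paper's graph-regularized
    # sparse coding (the graph term with alpha = 1 is omitted here).
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the data term
    a = np.zeros(D.shape[1])
    for _ in range(n_steps):
        a = soft_threshold(a - (D.T @ (D @ a - x)) / L, lam / L)
    return a

def joint_sgd(X, p=128, ITER=30, rho=0.9, t0=5, seed=0):
    """Hypothetical Algorithm-1-style loop: alternate sparse coding with
    SGD updates of the dictionary D and clustering parameters w.
    The clustering cost is stubbed as C(a, w) = (w^T a)^2; the paper
    uses EMC/MMC losses instead."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, p))
    D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
    w = rng.standard_normal(p)
    for t in range(1, ITER + 1):
        # Assumed decay schedule using the paper's rho and t0; the exact
        # schedule is not reproduced in this table.
        lr = rho ** (t / t0)
        i = rng.integers(n)                  # sample one data point
        a = sparse_code(X[:, i], D)
        g_w = 2.0 * (w @ a) * a              # gradient of the stub cost in w
        w -= lr * g_w
        # Simplified dictionary step on the reconstruction residual; the
        # paper back-propagates through the sparse-coding step itself.
        D -= lr * np.outer(D @ a - X[:, i], a)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, w
```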
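The experiment setup reshapes images into vectors and applies PCA "keeping 98% information," which we read as retaining 98% of the variance. Below is a self-contained sketch of that preprocessing; the helper name `pca_keep_variance` and the SVD-based implementation are ours, not the authors'.

```python
import numpy as np

def pca_keep_variance(X, keep=0.98):
    """Project n x d data onto the fewest principal components whose
    cumulative variance ratio reaches `keep` (here, 98%)."""
    Xc = X - X.mean(axis=0, keepdims=True)   # center the data
    # SVD of the centered data; squared singular values give component variances.
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_ratio = (S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(np.cumsum(var_ratio), keep)) + 1
    return Xc @ Vt[:k].T                     # reduced data, n x k

# Example with placeholder data shaped like ORL (400 images, 32 x 32 = 1,024 dims
# per the dataset table above); real experiments would load the actual images.
images = np.random.rand(400, 32, 32)
X = images.reshape(400, -1)                  # reshape images into vectors
X_low = pca_keep_variance(X, keep=0.98)
```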