Projective Low-rank Subspace Clustering via Learning Deep Encoder

Authors: Jun Li, Hongfu Liu, Handong Zhao, Yun Fu

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments verify that our scheme outperforms the related methods on large-scale datasets in a small amount of time. We achieve state-of-the-art clustering accuracy of 95.8% on MNIST using scattering convolution features.
Researcher Affiliation | Academia | Jun Li¹, Hongfu Liu¹, Handong Zhao¹ and Yun Fu¹,²; ¹Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA; ²College of Computer and Information Science, Northeastern University, Boston, MA 02115, USA.
Pseudocode | Yes | Algorithm 1 (PLD via ADM and gradient descent) and Algorithm 2 (PLrSC via PLD) are provided in the paper; a generic alternating-loop sketch appears after the table.
Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the methodology is openly available.
Open Datasets | Yes | Datasets: We conducted experiments on three large-scale datasets shown in Table 1. A brief description of the datasets is listed below. MNIST contains 70,000 training and testing examples of 28×28-pixel greyscale images of handwritten digits 0-9. MNIST-SC is a variant of MNIST; we follow the settings of [You et al., 2016]... NORB contains 48,600 images, combining the training samples with the test samples. Footnotes 2 and 3 provide URLs to the MNIST and NORB datasets, respectively. (A minimal MNIST loading sketch follows the table.)
Dataset Splits | No | The paper mentions '70,000 training and testing examples' for MNIST and 'combined the training samples with test samples' for NORB, but it does not provide the specific percentages or counts for training, validation, and test splits needed for reproduction. It mentions a 'randomly selected number of samples' but not how these selected samples are then split into training, validation, and test sets.
Hardware Specification | No | The paper states: 'All experiments were implemented in MATLAB R2015a and run on a Linux machine with 2.7 GHz CPU, 24GB memory.' This gives general specifications but does not name a specific CPU model or any GPU.
Software Dependencies | Yes | The paper states: 'All experiments were implemented in MATLAB R2015a.'
Experiment Setup | Yes | The regularization parameter γ and the learning rate η are easy to set: γ = 1 and η = 0.001 or 0.0001 in this paper. PLrSC gets the best results when the number of hidden units is 2000 in the three-layer deep encoder in PLD. When λ = 0.1, PLrSC reaches the best ACC and NMI. PLrSC achieves good results when the number of layers is 3 in PLD. (Hedged configuration and NMI sketches follow the table.)
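
The pseudocode row names Algorithm 1 (PLD via ADM and gradient descent), but the paper's update equations are not reproduced here. The toy MATLAB alternation below only illustrates the generic structure of an ADM-style outer loop whose encoder subproblem is solved by gradient descent: the objective (fitting a rank-truncated code with a linear encoder), the data, and all iteration budgets are our own assumptions, not the authors' Algorithm 1.

```matlab
% Toy alternation (illustrative only, not the paper's objective or updates):
% (i) truncate the codes Z = W*X to rank r via SVD, then
% (ii) take gradient steps on a linear encoder W fitting the truncated target.
rng(0);
X = randn(50, 200);                  % toy data: 200 samples, 50 dimensions
r = 5; W = randn(20, 50) * 0.1;      % toy 20-d linear encoder; rank target r
eta = 1e-3;                          % learning rate reported in the paper
for t = 1:50
    [U, S, V] = svd(W * X, 'econ');  % step 1: closed-form low-rank step
    Z = U(:, 1:r) * S(1:r, 1:r) * V(:, 1:r)';
    for k = 1:10                     % step 2: gradient descent on W
        G = (W * X - Z) * X';        % gradient of 0.5*||W*X - Z||_F^2
        W = W - eta * G;
    end
end
```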
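
Since the MNIST footnote points at the standard IDX-format distribution, a minimal MATLAB loading sketch may help reproduction. The file names are the standard ones from the MNIST site; the normalization to [0, 1] is our own choice.

```matlab
% Read MNIST images and labels in IDX format (big-endian).
fid = fopen('train-images-idx3-ubyte', 'r', 'ieee-be');
assert(fread(fid, 1, 'int32') == 2051);            % image-file magic number
n    = fread(fid, 1, 'int32');                     % number of images
rows = fread(fid, 1, 'int32');
cols = fread(fid, 1, 'int32');
X = fread(fid, [rows * cols, n], 'uint8=>double') / 255;  % one image per column
fclose(fid);

fid = fopen('train-labels-idx1-ubyte', 'r', 'ieee-be');
assert(fread(fid, 1, 'int32') == 2049);            % label-file magic number
m = fread(fid, 1, 'int32');
y = fread(fid, [m, 1], 'uint8=>double');           % digit labels 0-9
fclose(fid);
```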
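
The experiment-setup row pins down the reported hyperparameters; collecting them in one place makes the setup easier to reproduce. The struct and field names below are our own illustration, not the authors' code.

```matlab
% Hyperparameters as reported in the paper (field names are illustrative).
opts.gamma  = 1;      % regularization parameter gamma
opts.eta    = 1e-3;   % learning rate (0.001; 0.0001 also used)
opts.lambda = 0.1;    % lambda = 0.1 gives the best ACC and NMI
opts.hidden = 2000;   % hidden units in the deep encoder
opts.layers = 3;      % three-layer deep encoder in PLD
```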
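
Finally, the paper evaluates clustering with ACC and NMI but does not include its evaluation code. Below is a self-contained NMI sketch using the common sqrt(H(a)*H(b)) normalization; ACC additionally requires a best label matching (e.g., the Hungarian algorithm) and is omitted here.

```matlab
function v = nmi(a, b)
% Normalized mutual information between two label vectors a and b.
% A standard textbook formulation, not the authors' evaluation code.
    n = numel(a);
    [~, ~, ia] = unique(a(:));  [~, ~, ib] = unique(b(:));
    Pxy = accumarray([ia, ib], 1) / n;     % joint distribution (contingency / n)
    Px = sum(Pxy, 2);  Py = sum(Pxy, 1);   % marginal distributions
    PxPy = Px * Py;                        % outer product of marginals
    idx = Pxy > 0;
    I  = sum(Pxy(idx) .* log(Pxy(idx) ./ PxPy(idx)));   % mutual information
    Hx = -sum(Px(Px > 0) .* log(Px(Px > 0)));           % entropy of a
    Hy = -sum(Py(Py > 0) .* log(Py(Py > 0)));           % entropy of b
    v = I / sqrt(Hx * Hy);
end
```

For example, with the Statistics and Machine Learning Toolbox, nmi(y, kmeans(X', 10)) would score a k-means baseline against the MNIST labels loaded above.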