Deep Subspace Clustering Networks

Authors: Pan Ji, Tong Zhang, Hongdong Li, Mathieu Salzmann, Ian Reid

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show that our method significantly outperforms the state-of-the-art unsupervised subspace clustering techniques.
Researcher Affiliation | Academia | Pan Ji (University of Adelaide); Tong Zhang (Australian National University); Hongdong Li (Australian National University); Mathieu Salzmann (EPFL CVLab); Ian Reid (University of Adelaide)
Pseudocode | No | The paper describes the network architecture and training strategy in text and figures, but does not provide a formal pseudocode block or algorithm (see the illustrative sketch after this table).
Open Source Code | Yes | Due to the lack of space, we refer the reader to the publicly available implementation of SSC and Section 5 of [15], as well as to the TensorFlow implementation of our algorithm for more detail. (Footnote 2: https://github.com/panji1990/Deep-subspace-clustering-networks)
Open Datasets | Yes | We extensively evaluate our method on face clustering, using the Extended Yale B [21] and ORL [39] datasets, and on general object clustering, using COIL20 [31] and COIL100 [30].
Dataset Splits | No | The paper describes pre-training and fine-tuning using all available data (e.g., 'we build a big batch using all the data to minimize the loss L(Θ)') but does not specify a separate validation set for hyperparameter tuning or early stopping.
Hardware Specification | No | The paper does not specify any particular hardware components (e.g., CPU, GPU models, memory, or cloud instance types) used for running the experiments.
Software Dependencies | Yes | We implemented our method in Python with Tensorflow-1.0 [1],
Experiment Setup | Yes | Specifically, we use Adam [18], an adaptive momentum-based gradient descent method, to minimize the loss, where we set the learning rate to 1.0 × 10^-3 in all our experiments. We set the regularization parameters to λ1 = 1.0, λ2 = 1.0 × 10^(K/10 - 3). In the fine-tuning stage, we ran 30 epochs (COIL20) / 100 epochs (COIL100) for DSC-Net-L1 and 30 epochs (COIL20) / 120 epochs (COIL100) for DSC-Net-L2, and set the regularization parameters to λ1 = 1, λ2 = 150/30 (COIL20/COIL100). (The quoted hyperparameters are collected in the configuration sketch after this table.)
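
Since the paper gives no formal algorithm block, a minimal sketch of the fine-tuning objective and the final clustering step may help the reader. It follows the loss structure the paper describes (autoencoder reconstruction plus a weighted ||C||_p regularizer plus a weighted self-expression term, followed by spectral clustering on the symmetrised |C|), but the function names and the plain NumPy / scikit-learn stand-in for the authors' TensorFlow implementation and SSC-style post-processing are illustrative assumptions, not the released code.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def dsc_net_loss(X, X_hat, Z, C, lam1, lam2):
    """Fine-tuning objective described in the paper (DSC-Net-L1 flavour):
    1/2 ||X - X_hat||_F^2 + lam1 * ||C||_1 + lam2/2 * ||Z - CZ||_F^2,
    with diag(C) constrained to zero (not enforced in this sketch).
    Rows of X, X_hat, Z are samples; C is the N x N self-expression matrix."""
    reconstruction  = 0.5 * np.sum((X - X_hat) ** 2)
    regularizer     = np.sum(np.abs(C))           # the L2 variant would use np.sum(C ** 2)
    self_expression = 0.5 * np.sum((Z - C @ Z) ** 2)
    return reconstruction + lam1 * regularizer + lam2 * self_expression

def cluster_from_coefficients(C, n_clusters):
    """Build a symmetric affinity from the learned C and run spectral clustering,
    a simplified stand-in for the SSC-style post-processing the paper refers to."""
    affinity = 0.5 * (np.abs(C) + np.abs(C).T)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(affinity)
```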
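
To make the quoted experiment setup easier to scan, the hyperparameters can be gathered in one place. This is a reading aid assembled from the quotes in the table above; the name DSC_NET_SETUP, the grouping labels, and the reading of K as the number of clusters are assumptions rather than statements from the report.

```python
# Hyperparameters quoted in the "Experiment Setup" row, gathered for readability.
# Grouping labels and the interpretation of K as the number of clusters are assumptions.
DSC_NET_SETUP = {
    "optimizer": "Adam",
    "learning_rate": 1.0e-3,                                   # used in all experiments
    "lambda1": 1.0,
    "lambda2_rule": lambda K: 1.0 * 10 ** (K / 10.0 - 3.0),    # K: number of clusters (assumed)
    "coil_fine_tuning": {
        "epochs": {
            "DSC-Net-L1": {"COIL20": 30, "COIL100": 100},
            "DSC-Net-L2": {"COIL20": 30, "COIL100": 120},
        },
        "lambda1": 1.0,
        "lambda2": {"COIL20": 150, "COIL100": 30},
    },
}
```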