Neural Collaborative Subspace Clustering

Authors: Tong Zhang, Pan Ji, Mehrtash Harandi, Wenbing Huang, Hongdong Li

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We thoroughly assess and contrast the performance of our model against various state-of-the-art clustering algorithms including deep subspace-based ones." "Our empirical study shows the superiority of the proposed algorithm over several state-of-the-art baselines including deep subspace clustering techniques."
Researcher Affiliation | Collaboration | Motovis Australia Pty Ltd; Australian National University; NEC Labs America; Monash University; Tencent AI Lab.
Pseudocode | Yes | "Algorithm 1 Neural Collaborative Subspace Clustering" (a hedged sketch of this collaborative loop appears after the table).
Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of open-source code for the described methodology.
Open Datasets | Yes | "We evaluate our algorithm on three datasets, namely MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), and the Stanford Online Products dataset (Oh Song et al., 2016)" (a loading snippet appears after the table).
Dataset Splits | No | The paper mentions training epochs but does not specify a separate validation dataset split with percentages or counts.
Hardware Specification | Yes | "We implemented our framework with Tensorflow-1.6 (Abadi et al., 2016) on an Nvidia TITAN X GPU."
Software Dependencies | Yes | "We implemented our framework with Tensorflow-1.6 (Abadi et al., 2016)"
Experiment Setup | Yes | "For all the experiments, we pre-train the convolutional auto-encoder for 60 epochs with a learning rate 1.0 × 10^-3." "We keep the λ1 = 10 in all the experiments, and slightly change the l and u for each dataset." "We set the batch size to 5000, and used Adam (Kingma & Ba, 2014), an adaptive momentum based gradient descent method to minimize the loss in all our experiments." "We set the learning rate to 1.0 × 10^-5 for the auto-encoder and 1.0 × 10^-3 for other parts of the network in all training stages." (A sketch of this two-rate setup appears after the table.)
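
For the Pseudocode row, here is a minimal sketch of the collaborative loop as we read Algorithm 1: two modules each produce an affinity matrix, and confident pairs from one module, selected with the paper's thresholds u and l, act as binary pseudo-labels for the other. The NumPy setting, the function names, the threshold values, and the exact loss form are our assumptions for illustration, not the authors' code.

```python
import numpy as np

def select_pairs(affinity, u, l):
    """Confident positive/negative pair masks from one module's
    affinity matrix; u and l are the paper's upper/lower thresholds."""
    return affinity > u, affinity < l

def supervision_loss(affinity, pos, neg, eps=1e-8):
    """Stand-in cross-supervision loss: pull confident positive pairs
    toward affinity 1 and confident negatives toward 0."""
    loss = 0.0
    if pos.any():
        loss -= np.log(affinity[pos] + eps).mean()
    if neg.any():
        loss -= np.log(1.0 - affinity[neg] + eps).mean()
    return loss

# One collaborative step on hypothetical affinities A_s (self-expressive
# module) and A_c (classification module), both scaled to [0, 1]:
rng = np.random.default_rng(0)
A_s = rng.uniform(size=(8, 8))
A_c = rng.uniform(size=(8, 8))
pos_c, neg_c = select_pairs(A_c, u=0.9, l=0.1)  # classifier supervises subspace module
pos_s, neg_s = select_pairs(A_s, u=0.9, l=0.1)  # subspace module supervises classifier
print(supervision_loss(A_s, pos_c, neg_c) + supervision_loss(A_c, pos_s, neg_s))
```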
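The two MNIST-style datasets in the Open Datasets row ship with tf.keras (assuming a recent TensorFlow; the paper itself used 1.6), so a reproduction can load them in a few lines. Concatenating the official train/test partitions is a common choice for unsupervised clustering and is consistent with the missing-splits finding above; Stanford Online Products has no bundled loader and must be fetched from its project page.

```python
import numpy as np
from tensorflow.keras.datasets import mnist, fashion_mnist

# Merge the official partitions: clustering is usually evaluated
# on the full set of images rather than a held-out split.
(x_tr, y_tr), (x_te, y_te) = mnist.load_data()
x = np.concatenate([x_tr, x_te]).astype("float32") / 255.0
y = np.concatenate([y_tr, y_te])
print(x.shape, y.shape)  # (70000, 28, 28) (70000,)

# Fashion-MNIST follows the same interface; Stanford Online Products
# (Oh Song et al., 2016) must be downloaded separately.
(fx_tr, fy_tr), (fx_te, fy_te) = fashion_mnist.load_data()
```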
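The Experiment Setup row implies two learning rates within one training step: 1.0 × 10^-5 for the pre-trained auto-encoder and 1.0 × 10^-3 for the rest of the network. Below is a minimal TensorFlow 1.x sketch of that pattern; the tiny dense model, the scope names, and the stand-in loss are placeholders for illustration, not the authors' convolutional architecture or objective.

```python
import tensorflow as tf  # the paper reports TensorFlow 1.6

x = tf.placeholder(tf.float32, [None, 784])
with tf.variable_scope("autoencoder"):    # placeholder for the conv auto-encoder
    z = tf.layers.dense(x, 32, activation=tf.nn.relu)
    recon = tf.layers.dense(z, 784)
with tf.variable_scope("cluster_head"):   # placeholder for the clustering modules
    logits = tf.layers.dense(z, 10)

# Stand-in objective; the real loss combines reconstruction with the
# collaborative clustering terms weighted by lambda_1 = 10.
loss = tf.reduce_mean(tf.square(recon - x)) + 1e-4 * tf.reduce_mean(tf.square(logits))

# Adam with a distinct learning rate per variable group, as reported.
ae_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "autoencoder")
rest_vars = [v for v in tf.trainable_variables() if v not in ae_vars]
train_op = tf.group(
    tf.train.AdamOptimizer(1e-5).minimize(loss, var_list=ae_vars),    # auto-encoder
    tf.train.AdamOptimizer(1e-3).minimize(loss, var_list=rest_vars),  # other parts
)
```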