Cascade Subspace Clustering
Authors: Xi Peng, Jiashi Feng, Jiwen Lu, Wei-Yun Yau, Zhang Yi
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results show the effectiveness of our algorithm compared with 11 state-of-the-art clustering approaches on four data sets with respect to four evaluation metrics. |
| Researcher Affiliation | Collaboration | Xi Peng (1), Jiashi Feng (2), Jiwen Lu (3), Wei-Yun Yau (1), Zhang Yi (4); (1) Institute for Infocomm Research, A*STAR, Singapore; (2) National University of Singapore, Singapore; (3) Department of Automation, Tsinghua University, Beijing, China; (4) College of Computer Science, Sichuan University, Chengdu, P. R. China. |
| Pseudocode | No | The paper describes the implementation steps in prose but does not provide a formal pseudocode block or algorithm listing. |
| Open Source Code | No | The paper mentions implementing CSC using Theano (Theano Development Team 2016) and Keras (Chollet 2015), which are third-party libraries, but does not provide a link to its own source code for CSC. |
| Open Datasets | Yes | Data Sets: We use four data sets for our experiments, i.e., the full mnist data set (Lecun et al. 1998) (mnist-full), the test partition of mnist (mnist-test), the testing subset of cifar10 (Krizhevsky and Hinton 2009), and a subset of reuters (Lewis et al. 2004). See the data-loading sketch after the table. |
| Dataset Splits | No | The paper distinguishes 'mnist-full' (the full mnist data set), 'mnist-test' (the test partition of mnist), and the testing subset of cifar10, but it does not describe explicit training/validation/test splits or split percentages beyond identifying those test partitions. |
| Hardware Specification | Yes | The experiments are conducted on a machine with a Titan X GPU and 24x Intel Xeon CPU. |
| Software Dependencies | No | The paper states that CSC is implemented in Keras (Chollet 2015) on a Theano backend (Theano Development Team 2016), but it does not report version numbers for these dependencies (e.g., 'Keras 2.0' or 'Theano 0.9'). |
| Experiment Setup | Yes | We optimize CSC using stochastic sub-gradient descent (SGD) with momentum and weight decay. To initialize CSC, we train an m-500-500-2000-d-2000-500-500-m denoising autoencoder (Vincent et al. 2010) with a corruption ratio of 0.3, a momentum of 0.9, and a weight decay rate of 10⁻⁶. Moreover, we adopt rectified linear units (ReLU) as the activation function. Table 1 also specifies the learning rate, decay, number of epochs, and batch size per data set (e.g., mnist-full: d = 10, learning rate 10⁻⁵, decay 0.9, 300 epochs, batch size 256). See the pretraining sketch after the table. |
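The four benchmarks named in the "Open Datasets" row are standard, so a minimal loading sketch is easy to give. The snippet below is not from the paper: it assembles three of the four sets with the `keras.datasets` loaders, variable names are illustrative, and the reuters subset (RCV1, Lewis et al. 2004) is not bundled with Keras, so it is omitted here.

```python
# Hypothetical data assembly for the benchmarks described in the paper.
import numpy as np
from keras.datasets import mnist, cifar10

# mnist-full: all 70,000 images (train + test partitions combined).
(x_tr, y_tr), (x_te, y_te) = mnist.load_data()
mnist_full_x = np.concatenate([x_tr, x_te]).reshape(-1, 28 * 28) / 255.0
mnist_full_y = np.concatenate([y_tr, y_te])

# mnist-test: the 10,000-image test partition only.
mnist_test_x = x_te.reshape(-1, 28 * 28) / 255.0
mnist_test_y = y_te

# cifar10 testing subset: the 10,000-image test partition, flattened.
(_, _), (c_x, c_y) = cifar10.load_data()
cifar10_test_x = c_x.reshape(len(c_x), -1) / 255.0
cifar10_test_y = c_y.ravel()
```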
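The "Experiment Setup" row is concrete enough to sketch the described initialization. The following is a minimal reconstruction, not the authors' released code: it approximates the masking corruption with `Dropout`, stands in for the reported weight decay with an L2 kernel regularizer (Keras's `SGD(decay=...)` is a learning-rate decay, not weight decay), and assumes the input width `m`, a linear output layer, and the partially garbled Table 1 values for mnist-full.

```python
# Hedged sketch of the denoising-autoencoder pretraining (Keras 2-style API).
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD
from keras.regularizers import l2

m = 784  # ASSUMED input width (flattened 28x28 mnist images)
d = 10   # bottleneck width, per Table 1 for mnist-full

def dense(units, activation="relu"):
    # L2 kernel regularization stands in for the reported 1e-6 weight decay.
    return Dense(units, activation=activation, kernel_regularizer=l2(1e-6))

model = Sequential([
    # Masking corruption with ratio 0.3; Dropout approximates it (it also
    # rescales surviving units, which pure masking noise does not).
    Dropout(0.3, input_shape=(m,)),
    # Encoder: m -> 500 -> 500 -> 2000 -> d, with ReLU activations.
    dense(500), dense(500), dense(2000), dense(d),
    # Decoder mirrors the encoder: d -> 2000 -> 500 -> 500 -> m.
    dense(2000), dense(500), dense(500),
    dense(m, activation=None),  # linear reconstruction layer (ASSUMED)
])

# SGD with momentum 0.9, as reported; the learning rate, epoch count, and
# batch size follow the reconstructed Table 1 values for mnist-full.
model.compile(optimizer=SGD(lr=1e-5, momentum=0.9), loss="mse")

# Reconstruct clean inputs from corrupted ones (Dropout is train-time only).
x = mnist_full_x.astype("float32")  # from the loading sketch above
model.fit(x, x, epochs=300, batch_size=256)
```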