Latent Distribution Preserving Deep Subspace Clustering
Authors: Lei Zhou, Xiao Bai, Dong Wang, Xianglong Liu, Jun Zhou, Edwin Hancock
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on several public databases show that our method achieves significant improvement compared with the state-of-the-art subspace clustering methods. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Beijing Advanced Innovation Center for Big Data and Brain Computing, Jiangxi Research Institute, Beihang University, Beijing, China; (2) State Key Laboratory of Software Development Environment, Beihang University, Beijing, China; (3) School of Information and Communication Technology, Griffith University, Nathan, Australia; (4) Department of Computer Science, University of York, York, U.K. |
| Pseudocode | No | The paper describes the algorithm using textual descriptions and mathematical formulations (equations), but it does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code. |
| Open Datasets | Yes | We test the proposed method on handwritten digit clustering using the MNIST database [Lecun et al., 1998]... Since subspaces are commonly used to capture the appearance of faces under varying illuminations, we also test the performance of our method on face clustering with the CMU PIE database [Sim et al., 2001]... We further evaluated DPSC on the challenging object clustering task using the COIL-20 and COIL-100 [Nene et al., 1996] databases |
| Dataset Splits | Yes | Each cluster contains 6,000 images for training and 1,000 images for testing, with a size of 28 × 28 pixels in each image. We randomly selected 1,000 images from each digit for our experiment. We fixed the number of clusters k = 10 and chose different numbers of data points for each cluster. Each cluster contained N_i data points randomly chosen from the corresponding 1,000 images, where N_i ∈ {100, 500, 1000}, so that the number of total points N ∈ {1000, 5000, 10000}. (A hedged sketch of this sampling protocol appears after the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper mentions using the 'Adam algorithm' for training and 'NCut algorithm' for spectral clustering. However, it does not specify any software dependencies with version numbers (e.g., Python version, specific deep learning frameworks like TensorFlow or PyTorch, or library versions). |
| Experiment Setup | Yes | We train the whole DPSC network by minimizing the loss function (6) with the Adam algorithm [Kingma and Ba, 2015]. The learning rate is set as 1 × 10⁻³ for all experiments... For DPSC, we set the bandwidth h = 2, λ = λ1 = 1 and λ2 = 2. (A hedged configuration sketch appears after the table.) |
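
The dataset-split protocol quoted above is specific enough to reconstruct. Below is a minimal sketch of the MNIST sampling step, assuming NumPy; the function name `sample_mnist_clusters` and the fixed seed are our own illustration, not from the paper.

```python
import numpy as np

def sample_mnist_clusters(images, labels, n_per_cluster, seed=0):
    """Hypothetical reconstruction of the paper's MNIST protocol: draw
    1,000 images per digit, then keep N_i = n_per_cluster of them
    (N_i in {100, 500, 1000}), so N = 10 * n_per_cluster in total."""
    rng = np.random.default_rng(seed)  # seed choice is our assumption
    picked = []
    for digit in range(10):                               # k = 10 clusters, one per digit
        idx = np.flatnonzero(labels == digit)
        pool = rng.choice(idx, size=1000, replace=False)  # 1,000 images per digit
        picked.append(rng.choice(pool, size=n_per_cluster, replace=False))
    picked = np.concatenate(picked)
    return images[picked], labels[picked]

# Example: the N = 5000 setting (N_i = 500 points per cluster)
# X_sub, y_sub = sample_mnist_clusters(X, y, n_per_cluster=500)
```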
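The quoted setup also pins down the optimizer and loss weights. The sketch below assumes PyTorch, since the paper names no framework (see the Software Dependencies row); the placeholder autoencoder and the decomposition of loss (6) into three weighted terms are our assumptions inferred from the quoted hyperparameters, not the authors' released method.

```python
import torch
import torch.nn as nn

LR = 1e-3       # Adam learning rate, "set as 1 x 10^-3 for all experiments"
H = 2.0         # kernel bandwidth h = 2 (usage in the kernel is not quoted here)
LAMBDA1 = 1.0   # lambda = lambda_1 = 1
LAMBDA2 = 2.0   # lambda_2 = 2

# Placeholder network: the real DPSC architecture is not released, so a
# tiny fully connected autoencoder stands in for it in this sketch.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 784))
optimizer = torch.optim.Adam(model.parameters(), lr=LR)

def total_loss(recon, self_expr, dist_preserve):
    # Weighted combination in the spirit of the paper's loss (6); the exact
    # grouping of terms is our assumption based on the quoted weights.
    return recon + LAMBDA1 * self_expr + LAMBDA2 * dist_preserve
```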