Self-Representative Manifold Concept Factorization with Adaptive Neighbors for Clustering

Authors: Sihan Ma, Lefei Zhang, Wenbin Hu, Yipeng Zhang, Jia Wu, Xuelong Li

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From Section 4 (Experimental Results): "In this section, we evaluate the performance of our proposed method on some datasets to show the effectiveness of our algorithm. We compare our method with some existing algorithms including K-means, CF [Xu and Gong, 2004], NMF [Lee and Seung, 2001], SMCE [Elhamifar and Vidal, 2011], SSC [Elhamifar and Vidal, 2013], and Normalized Cuts (Ncut) [Shi and Malik, 2000]."
Researcher Affiliation | Academia | 1) School of Computer, Wuhan University; 2) Department of Computing, Macquarie University; 3) Center for OPTIMAL, Xi'an Institute of Optics and Precision Mechanics, CAS
Pseudocode | No | The paper describes the optimization algorithm through mathematical derivations and textual explanation, but it does not include structured pseudocode or an algorithm block.
Open Source Code | No | The paper provides no explicit statement or link regarding the availability of its source code.
Open Datasets | Yes | "There are in total six datasets used in our experiments, all from the UCI Machine Learning Repository. Table 2 summarizes the characteristics of the datasets used in our experiments."
Dataset Splits | No | The paper describes parameter searching and reporting the best results, which implies a validation process, but it provides no explicit training/validation/test splits and mentions no specific validation set for data partitioning.
Hardware Specification | No | The paper does not specify the hardware used to run its experiments.
Software Dependencies | No | The paper names various algorithms and datasets but lists no specific software with version numbers (e.g., libraries, frameworks, or operating systems) used in the experiments.
Experiment Setup | Yes | "Parameters Setting. To compare these methods fairly, we run them with some selected parameter combinations and report the best result for comparison. For K-means, NMF, CF and Ncut, we run them 10 times and calculate both the mean and standard deviation. ... During the experiments, we set the cluster number and dimension of the reduced data representation equal to the number of ground-truth classes for all datasets and methods. ... For our method, the regularization parameters λ1 and λ2 are set by searching over {10^-5, 10^-4, ..., 10^4, 10^5}. ... We have also initialized W and V by PCAN [Nie et al., 2014]."
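The reported setup searches λ1 and λ2 over a log-spaced grid from 10^-5 to 10^5 and keeps the best-scoring combination. A minimal sketch of that kind of grid search is below; the function name `grid_search` and the toy scoring function are hypothetical, not the paper's actual clustering objective.

```python
import itertools
import math

def grid_search(score_fn):
    """Return the (lambda1, lambda2) pair that maximizes score_fn
    over the log-spaced grid {10^-5, ..., 10^5} described above.
    score_fn is a hypothetical stand-in for the evaluation metric."""
    grid = [10.0 ** p for p in range(-5, 6)]  # 10^-5, 10^-4, ..., 10^5
    best_params, best_score = None, float("-inf")
    for lam1, lam2 in itertools.product(grid, grid):
        s = score_fn(lam1, lam2)
        if s > best_score:
            best_params, best_score = (lam1, lam2), s
    return best_params, best_score

# Toy score function (hypothetical), peaking at lambda1 = 1, lambda2 = 10:
toy = lambda l1, l2: -(math.log10(l1) ** 2 + (math.log10(l2) - 1) ** 2)
params, score = grid_search(toy)
print(params)  # (1.0, 10.0)
```

Searching both parameters jointly costs 11 × 11 = 121 runs per dataset, which matches the paper's report-the-best-result protocol.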