One-Step Spectral Clustering via Dynamically Learning Affinity Matrix and Subspace

Authors: Xiaofeng Zhu, Wei He, Yonggang Li, Yang Yang, Shichao Zhang, Rongyao Hu, Yonghua Zhu

AAAI 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both synthetic and benchmark datasets verified that the proposed method produced more effective clustering results than previous clustering methods. |
| Researcher Affiliation | Academia | Guangxi Key Lab of Multi-source Information Mining & Security, Guangxi Normal University, Guilin, 541004, China; University of Electronic Science and Technology of China, Chengdu, 611731, China; Guangxi University, Nanning, 530004, China |
| Pseudocode | No | The paper describes the optimization steps using equations and textual explanations, but does not include an explicitly labeled "Pseudocode" or "Algorithm" block. |
| Open Source Code | No | The paper does not provide any statement regarding the release of source code, nor a link to a code repository. |
| Open Datasets | Yes | The datasets used (see Table 1 for details) include image datasets such as Umist, Ecoli, Yale B, Coil, and Jaffe; the Wine dataset is downloaded from (Zhong and Fukushima 2007). Umist (Graham and Allinson 1995) consists of 575 face images of 20 people. |
| Dataset Splits | No | The paper does not specify explicit train/validation/test splits with percentages, sample counts, or references to predefined splits. It mentions running k-means 10 times and tuning parameters, but lacks a detailed splitting methodology. |
| Hardware Specification | No | The paper does not provide any specific hardware details, such as CPU/GPU models, memory, or cloud computing specifications, used for the experiments. |
| Software Dependencies | No | The paper does not specify any software libraries or version numbers (e.g., Python, PyTorch, or scikit-learn versions) used for the implementation or experiments. |
| Experiment Setup | Yes | For fair comparison, the self-tune Gaussian method was used to construct the initial affinity matrix, with k set in the range {5, 10, 15}, following the CLR setting for SSC, LRR, RCut, and NCut; the experiments were repeated 100 times for k-means, RCut, and NCut, and the average k-means performance was reported to eliminate random error; all parameters were tuned in the range {0.01, 1, 10, 100} to report the best performance for the spectral clustering methods (SSC, LRR, CLR, and the proposed method). |
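The repetition-and-averaging protocol described in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the paper's code: it uses a toy two-blob dataset, a plain NumPy Lloyd's k-means (the `kmeans` helper is hypothetical), and 10 repetitions instead of the paper's 100, averaging the clustering accuracy across random seeds.

```python
import numpy as np

def kmeans(X, k, seed, n_iter=50):
    """Plain Lloyd's k-means; returns a label per sample."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct random samples.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy data: two well-separated Gaussian blobs (stand-in for a benchmark set).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(5.0, 0.3, (50, 2))])
y_true = np.array([0] * 50 + [1] * 50)

# Repeat k-means with different seeds and average the score,
# mirroring the paper's protocol of averaging to remove random error.
accs = []
for seed in range(10):
    labels = kmeans(X, k=2, seed=seed)
    # For k=2, accuracy is the better of the two label permutations.
    acc = max(np.mean(labels == y_true), np.mean(labels != y_true))
    accs.append(acc)

print(round(float(np.mean(accs)), 3))  # averaged accuracy over 10 runs
```

On this toy data the averaged accuracy lands close to 1.0; on real benchmarks the averaging matters because k-means is sensitive to its random initialization.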