Tri-level Robust Clustering Ensemble with Multiple Graph Learning

Authors: Peng Zhou, Liang Du, Yi-Dong Shen, Xuejun Li (pp. 11125-11133)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on benchmark datasets also demonstrate it. In this section, we conduct extensive experiments to demonstrate the effectiveness of the proposed method.
Researcher Affiliation | Academia | 1 School of Computer Science and Technology, Anhui University; 2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences; 3 School of Computer and Information Technology, Shanxi University
Pseudocode | Yes | Algorithm 1: TRCE Algorithm
Open Source Code | No | The paper does not explicitly provide information about open-source code availability or links to a code repository.
Open Datasets | Yes | We use 10 datasets, including AR (Wang, Nie, and Huang 2014), Coil20 (Cai et al. 2010), K1b (Zhao and Karypis 2004), Lung (Hong and Yang 1991), Medical (Zhou et al. 2015b), Tr41 (Zhao and Karypis 2004), Tdt2 (Cai et al. 2007), TOX (Li et al. 2018), UMIST (Wechsler et al. 2012), Warp AR (Li et al. 2018). The detailed information of these datasets is shown in Table 1.
Dataset Splits | Yes | Following the experimental setup in (Zhou et al. 2015b), we run k-means 200 times with different random initializations to obtain 200 base results. Then we divide them into 10 subsets, with 20 in each one. We apply clustering ensemble methods on each subset, and report the average results over the 10 subsets. (This protocol is sketched in code below the table.)
Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers.
Experiment Setup | Yes | Our method adjusts γ automatically as introduced in Algorithm 1. The parameter ρ is also automatically decided. We first initialize ρ = 1; then, if the rank of L is larger than n - c, meaning the rank constraint is not strong enough, we double ρ, and if the rank is smaller than n - c, i.e., the constraint is too strong, we halve it. The only hyper-parameter tuned manually is λ, which we tune in [10^-5, 10^5]. (The ρ update is sketched in code below the table.)
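
The evaluation protocol quoted in the Dataset Splits row is concrete enough to sketch in code. The snippet below is a minimal illustration, assuming scikit-learn's KMeans, a feature matrix X of shape (n_samples, n_features), a known cluster count k, and hypothetical clustering_ensemble and score_fn callables standing in for TRCE and the evaluation metric; none of these names come from the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    def run_protocol(X, k, clustering_ensemble, score_fn, seed=0):
        rng = np.random.RandomState(seed)
        # Run k-means 200 times with different random initializations
        # to obtain 200 base clustering results.
        base_results = [
            KMeans(n_clusters=k, n_init=1,
                   random_state=rng.randint(1 << 30)).fit_predict(X)
            for _ in range(200)
        ]
        # Divide the 200 base results into 10 subsets of 20 each.
        subsets = [base_results[i * 20:(i + 1) * 20] for i in range(10)]
        # Apply the ensemble method to each subset and report the
        # average score over the 10 subsets.
        scores = [score_fn(clustering_ensemble(subset, k)) for subset in subsets]
        return float(np.mean(scores))

Here score_fn would be a clustering metric such as normalized mutual information against ground-truth labels, computed per subset before averaging.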
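
The automatic ρ update quoted in the Experiment Setup row reads as a standard rank-constraint heuristic. The snippet below is a minimal sketch of one plausible implementation, assuming a symmetric graph Laplacian L of size n x n, c target clusters, and a numerical tolerance for deciding which eigenvalues count as zero; the quoted text does not specify how the rank is tested, so that part is an assumption.

    import numpy as np

    def update_rho(L, c, rho, tol=1e-10):
        n = L.shape[0]
        # rank(L) = n minus the number of (numerically) zero eigenvalues;
        # eigvalsh applies because a graph Laplacian is symmetric.
        eigvals = np.linalg.eigvalsh(L)
        rank = int(np.sum(eigvals > tol))
        if rank > n - c:
            return rho * 2.0   # constraint not strong enough: double rho
        if rank < n - c:
            return rho / 2.0   # constraint too strong: halve rho
        return rho             # rank(L) == n - c: leave rho unchanged

The usual motivation for enforcing rank(L) = n - c is that a graph whose Laplacian has exactly c zero eigenvalues has exactly c connected components, one per cluster. The manually tuned λ grid over [10^-5, 10^5] would correspond to something like np.logspace(-5, 5, 11).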