Clustering Ensemble Meets Low-rank Tensor Approximation
Authors: Yuheng Jia, Hui Liu, Junhui Hou, Qingfu Zhang (pp. 7970–7978)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results over 7 benchmark data sets show that the proposed model achieves a breakthrough in clustering performance, compared with 11 state-of-the-art methods. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Southeast University, Nanjing 210096, China 2Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong SAR |
| Pseudocode | Yes | Algorithm 1 t-SVD of a 3-D tensor (Zhang et al. 2014) and Algorithm 2 Numerical solution to Eq. (9) are provided. |
| Open Source Code | Yes | To reproduce the results, we made the code publicly available at https://github.com/jyhlearning/TensorClusteringEnsemble. |
| Open Datasets | Yes | Following recent clustering ensemble papers (Huang, Wang, and Lai 2018; Huang, Lai, and Wang 2016; Zhou, Zheng, and Pan 2019), we adopted 7 commonly used data sets, i.e., BinAlpha, Multiple Features (MF), MNIST, Semeion, CalTech, Texture and ISOLET. |
| Dataset Splits | No | The paper does not explicitly provide details about train/validation/test dataset splits. It mentions randomly selecting samples and base clusterings for repetitions, but not data partitioning for validation purposes. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU, GPU models, memory) used for the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments (e.g., programming languages, libraries, frameworks). |
| Experiment Setup | Yes | For the compared methods, we set the hyper-parameters according to their original papers. If there are no suggested values, we exhaustively searched the hyper-parameters, and used the ones producing the best performance. The proposed model only contains one hyperparameter λ, which was set to 0.002 for all the data sets. |
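The pseudocode row above refers to Algorithm 1, the t-SVD of a 3-D tensor (Zhang et al. 2014). As background, here is a minimal NumPy sketch of the standard FFT-based t-SVD construction (due to Kilmer and Martin) that this algorithm follows; the function names and shapes are illustrative, not the authors' released code.

```python
import numpy as np

def tsvd(A):
    """t-SVD of a real n1 x n2 x n3 tensor: FFT along the third mode,
    then an ordinary SVD of each frontal slice in the Fourier domain.
    A sketch of the standard construction, not the paper's exact code."""
    n1, n2, n3 = A.shape
    Af = np.fft.fft(A, axis=2)
    Uf = np.zeros((n1, n1, n3), dtype=complex)
    Sf = np.zeros((n1, n2, n3), dtype=complex)
    Vf = np.zeros((n2, n2, n3), dtype=complex)
    half = n3 // 2 + 1
    for k in range(half):
        # SVD of the k-th frontal slice in the Fourier domain.
        U, s, Vh = np.linalg.svd(Af[:, :, k])
        Uf[:, :, k] = U
        Sf[:len(s), :len(s), k] = np.diag(s)
        Vf[:, :, k] = Vh.conj().T
    for k in range(half, n3):
        # Conjugate symmetry of the FFT of a real tensor lets us
        # fill the remaining slices from the first half.
        Uf[:, :, k] = Uf[:, :, n3 - k].conj()
        Sf[:, :, k] = Sf[:, :, n3 - k].conj()
        Vf[:, :, k] = Vf[:, :, n3 - k].conj()
    ifft = lambda T: np.real(np.fft.ifft(T, axis=2))
    return ifft(Uf), ifft(Sf), ifft(Vf)

def tprod(A, B):
    """t-product of two 3-D tensors: slice-wise matrix products
    in the Fourier domain."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def ttranspose(A):
    """Tensor transpose: transpose each frontal slice and reverse
    the order of slices 2..n3."""
    n1, n2, n3 = A.shape
    At = np.zeros((n2, n1, n3))
    At[:, :, 0] = A[:, :, 0].T
    for k in range(1, n3):
        At[:, :, k] = A[:, :, n3 - k].T
    return At
```

Under this construction, the factors satisfy `A ≈ tprod(tprod(U, S), ttranspose(V))`, the tensor analogue of the matrix identity A = U S Vᵀ that low-rank tensor approximation methods truncate.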