Fusion Multiple Kernel K-means
Authors: Yi Zhang, Xinwang Liu, Jiyuan Liu, Sisi Dai, Changwang Zhang, Kai Xu, En Zhu (pp. 9109-9117)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experimental results demonstrate that our proposed algorithm achieves state-of-the-art performance on multiple public datasets, validating its effectiveness. |
| Researcher Affiliation | Academia | 1 School of Computer, National University of Defense Technology, Changsha, China, 410073 2 CCF Theoretical Computer Science Technical Committee, Shenzhen, China, 518064 |
| Pseudocode | Yes | Algorithm 1: Solving H with orthogonality constraint via curvilinear search algorithm and Algorithm 2: Fusion Multiple Kernel K-means |
| Open Source Code | Yes | The code of this work is publicly available at https://github.com/ethan-yizhang/Fusion-Multiple-Kernel-K-means. |
| Open Datasets | Yes | Multiple public datasets are adopted to evaluate the performance of our proposed FMKKM, including Texas, Wisconsin, Football, BBCSport, Willow, Flower17, Flower102, ALOI-100, Reuters. The detailed information of the datasets is summarized in Table 1. |
| Dataset Splits | No | The paper mentions 'For all algorithms, we repeat each experiment 50 times with random initialization to reduce the randomness effect caused by k-means' but does not provide specific train/validation/test split percentages, sample counts, or references to predefined splits for the datasets used. |
| Hardware Specification | Yes | All experiments are performed on a PC with Intel Core i9-10900X CPU and 64G RAM. |
| Software Dependencies | No | The paper mentions using 'k-means' and solving equations 'by SVD', but it does not specify version numbers for any software libraries, frameworks, or programming languages used for implementation. |
| Experiment Setup | Yes | For all datasets, the true number of clusters k is prespecified and set as the input of algorithms. We repeat each experiment 50 times with random initialization to reduce the randomness effect caused by k-means. Figure 4 presents the ACC of FMKKM on the Wisconsin and ALOI-100 datasets by varying λ1 in 2^[1:9] and λ2 in 2^[3:10], respectively. |
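The experiment setup above describes a common evaluation protocol: repeat clustering 50 times with random initialization and report aggregate accuracy, while grid-searching hyperparameters over powers of two. The minimal sketch below illustrates that protocol only; it is not the authors' code, uses a plain NumPy k-means on toy two-cluster data in place of FMKKM's kernel fusion, and all function names and data are illustrative assumptions.

```python
import numpy as np
from itertools import permutations

def kmeans(X, k, rng, n_iter=50):
    """Plain Lloyd's k-means with random initial centers (stand-in for the
    paper's k-means step; not the FMKKM algorithm itself)."""
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def clustering_acc(labels, truth, k):
    """Clustering accuracy via brute-force label matching (fine for small k)."""
    return max(
        (np.array([perm[l] for l in labels]) == truth).mean()
        for perm in permutations(range(k))
    )

# Toy two-cluster data standing in for a real benchmark dataset.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
truth = np.array([0] * 50 + [1] * 50)

# Hyperparameter grids as described: lambda1 in 2^[1:9], lambda2 in 2^[3:10].
lambda1_grid = 2.0 ** np.arange(1, 10)
lambda2_grid = 2.0 ** np.arange(3, 11)

# Protocol from the paper's setup: 50 repetitions with random initialization,
# aggregating ACC to reduce the randomness effect caused by k-means.
accs = [clustering_acc(kmeans(X, 2, rng), truth, 2) for _ in range(50)]
print(f"mean ACC over 50 runs: {np.mean(accs):.3f}")
```

Averaging over repeated random initializations, as the paper does, makes the reported ACC far less sensitive to any single lucky or unlucky k-means start.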