Multiple Kernel Clustering with Local Kernel Alignment Maximization

Authors: Miaomiao Li, Xinwang Liu, Lei Wang, Yong Dou, Jianping Yin, En Zhu

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "As experimentally demonstrated on six challenging multiple kernel learning benchmark data sets, our algorithm significantly outperforms the state-of-the-art comparable methods in the recent literature, verifying the effectiveness and superiority of maximizing local kernel alignment." Extensive experimental study has been conducted on six MKL benchmark data sets to evaluate the clustering performance of the proposed algorithm.
Researcher Affiliation | Academia | Miaomiao Li, Xinwang Liu: School of Computer, National University of Defense Technology, Changsha, China, 410073; Lei Wang: School of Computer Science and Software Engineering, University of Wollongong, NSW, Australia, 2522; Yong Dou, Jianping Yin, En Zhu: School of Computer, National University of Defense Technology, Changsha, China, 410073.
Pseudocode | Yes | Algorithm 1: Multiple Kernel Clustering with Local Kernel Alignment Maximization. A hedged sketch of the alternating optimization behind Algorithm 1 is given after this table.
Open Source Code | No | The paper provides links to the code for the compared algorithms (e.g., 'The Matlab codes of KKM, MKKM and LMKKM are publicly available at https://github.com/mehmetgonen/lmkkmeans.') but does not state that the code for the proposed method is open source or provide a link for it.
Open Datasets | Yes | The proposed algorithm is experimentally evaluated on six widely used MKL benchmark data sets, shown in Table 1: UCI-Digital [1], Oxford Flower17 [2], Protein fold prediction [3], YALE [4], Oxford Flower102 [5] and Caltech102 [6]. URLs are provided in footnotes: [1] http://ss.sysu.edu.cn/~py/ [2] http://www.robots.ox.ac.uk/~vgg/data/flowers/17/ [3] http://mkl.ucsd.edu/dataset/protein-fold-prediction [4] http://vismod.media.mit.edu/vismod/classes/mas62200/datasets/ [5] http://www.robots.ox.ac.uk/~vgg/data/flowers/102/ [6] http://mkl.ucsd.edu/dataset/ucsd-mit-caltech-101-mkl-dataset. For the rest of the data sets, all kernel matrices are pre-computed and publicly downloadable from the above websites.
Dataset Splits | No | The paper mentions repeating each experiment 50 times with random initialization and reporting the best result, but it does not specify explicit train/validation/test splits or cross-validation for the model training process.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper mentions 'Matlab codes' for the compared algorithms but does not provide specific software dependencies with version numbers for its own implementation or experiments.
Experiment Setup | Yes | "In all our experiments, all base kernels are first centered and then scaled so that for all i and p we have Kp(xi, xi) = 1", following [Cortes et al., 2012; 2013]. For all data sets, the true number of clusters is assumed to be known and is set to the true number of classes. For the proposed algorithm, the regularization parameter λ and the neighborhood size are chosen by grid search from [2^−15, 2^−13, ..., 2^15] and [0.05, 0.1, ..., 0.95] · n respectively, where n is the number of samples. The preprocessing and the parameter grids are sketched after this table.
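
To make the Pseudocode row concrete, below is a minimal Python sketch of the alternating scheme that Algorithm 1 follows. It is not the authors' code: the weight step swaps in the classical closed-form multiple kernel k-means update, whereas the paper's Algorithm 1 solves a QP that maximizes local kernel alignment over each sample's neighborhood with regularization λ. The kernel preprocessing (centering, then scaling to unit diagonal) follows the Experiment Setup row.

```python
# Minimal sketch of the alternating scheme behind Algorithm 1 -- NOT the
# authors' code. Assumption: the weight step uses the classical closed-form
# MKKM update as a stand-in for the paper's localized-alignment QP.
import numpy as np

def center_and_scale(K):
    """Center a base kernel, then scale it to unit diagonal so that
    K(x_i, x_i) = 1, as described in the setup (Cortes et al., 2012; 2013)."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                  # feature-space centering
    d = np.sqrt(np.clip(np.diag(Kc), 1e-12, None))
    return Kc / np.outer(d, d)                      # unit diagonal

def mkc_alternating(kernels, k, n_iter=20):
    """Alternate between the clustering matrix H and kernel weights beta."""
    m, n = len(kernels), kernels[0].shape[0]
    beta = np.full(m, 1.0 / m)                      # start on the simplex
    for _ in range(n_iter):
        # Step 1: fix beta; H = top-k eigenvectors of the combined kernel.
        K = sum(b ** 2 * Kp for b, Kp in zip(beta, kernels))
        w, V = np.linalg.eigh(K)
        H = V[:, np.argsort(w)[::-1][:k]]           # n x k, H^T H = I_k
        # Step 2: fix H; simplex-constrained weight update. Closed form:
        # beta_p proportional to 1 / tr(K_p (I - H H^T)) -- a stand-in for
        # the paper's QP over each sample's local neighborhood.
        M = np.eye(n) - H @ H.T
        cost = np.clip([np.trace(Kp @ M) for Kp in kernels], 1e-12, None)
        beta = (1.0 / cost) / np.sum(1.0 / cost)
    return H, beta
```

Final labels would then come from k-means on the rows of H; per the paper's protocol, that step is repeated 50 times with random initialization and the best result is reported.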
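
The parameter grids in the Experiment Setup row translate directly into code. A minimal sketch under stated assumptions: run_mkc_lka and accuracy are hypothetical stand-ins (the paper releases neither), and the second grid is read as a ratio that is multiplied by the sample count n to give a neighborhood size.

```python
import numpy as np

# Grids exactly as stated: lambda over odd powers of two from 2^-15 to 2^15,
# and a ratio from 0.05 to 0.95 in steps of 0.05, later multiplied by n.
lambda_grid = [2.0 ** e for e in range(-15, 16, 2)]
ratio_grid = [round(0.05 * i, 2) for i in range(1, 20)]

def grid_search(kernels, k, labels_true, run_mkc_lka, accuracy):
    """Exhaustive (lambda, ratio) search; `run_mkc_lka` and `accuracy` are
    hypothetical stand-ins for the paper's method and its clustering metric."""
    n = kernels[0].shape[0]
    best_params, best_acc = None, -np.inf
    for lam in lambda_grid:
        for ratio in ratio_grid:
            labels = run_mkc_lka(kernels, k, lam, round(ratio * n))
            acc = accuracy(labels_true, labels)
            if acc > best_acc:
                best_params, best_acc = (lam, ratio), acc
    return best_params, best_acc
```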