Sample-adaptive Multiple Kernel Learning

Authors: Xinwang Liu, Lei Wang, Jian Zhang, Jianping Yin

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | As demonstrated on five benchmark data sets, the proposed algorithm consistently outperforms the comparable ones in the literature.
Researcher Affiliation | Academia | Xinwang Liu, School of Computer, National University of Defense Technology, Changsha, China 410073; Lei Wang, School of Computer Science and Software Engineering, University of Wollongong, NSW, Australia 2522; Jian Zhang, Faculty of Engineering and Information Technology, University of Technology Sydney, NSW, Australia 2007; Jianping Yin, School of Computer, National University of Defense Technology, Changsha, China 410073
Pseudocode | Yes | Algorithm 1: Proposed Sample-adaptive MKL Algorithm
Open Source Code | No | The paper does not provide any explicit statement about making its own source code available, nor a link to a repository for its methodology.
Open Datasets | Yes | We compare the proposed SAMKL with LMKL (Gönen and Alpaydin 2008) on the protein fold prediction data set http://mkl.ucsd.edu/dataset/. ... Four benchmark data sets are used, including the psortPos, psortNeg, plant and Caltech-101 data sets. All of them can be downloaded from http://mkl.ucsd.edu/dataset/.
Dataset Splits | Yes | The regularization parameter C for all three MKL algorithms is chosen from [10^-1, 10^0, ..., 10^4] by five-fold cross-validation on the training data sets. ... For the psortPos, psortNeg and plant data sets, we randomly split the data into 20 groups, with 50% : 50% for training and test. ... For Caltech-101, we use the five pre-defined training and test partitions.
Hardware Specification | No | The paper discusses computational time and efficiency but provides no hardware details, such as CPU or GPU models or memory size, for the machines used in the experiments.
Software Dependencies | No | The paper mentions using "existing MKL packages" and "off-the-shelf packages such as MOSEK", but it does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | For the proposed SAMKL, we empirically set h0 = (1, 1, ..., 1) and m0 = 3. ... The regularization parameter C for all three MKL algorithms is chosen from [10^-1, 10^0, ..., 10^4] by five-fold cross-validation on the training data sets. ... Each base kernel matrix is normalized to have a unit trace. ... For our proposed SAMKL, h0 is again set as (1, 1, ..., 1) and m0 is empirically set as 20 and 10 on the three protein data sets and Caltech-101, respectively. ... For Caltech-101, C is set to 10^4 experimentally.
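The setup and split rows above describe three concrete, reproducible details: each base kernel is normalized to unit trace, C is searched over [10^-1, 10^0, ..., 10^4] by five-fold cross-validation, and the protein data sets are split into 20 random 50% : 50% train/test partitions. The sketch below illustrates those preprocessing steps only; it is not the authors' (unreleased) SAMKL code, and the linear kernel and random data are placeholders.

```python
import numpy as np

def unit_trace_normalize(K):
    """Scale a kernel matrix so that trace(K) = 1, per the paper's setup."""
    return K / np.trace(K)

# Regularization grid [10^-1, 10^0, ..., 10^4] searched by five-fold CV.
C_grid = [10.0 ** p for p in range(-1, 5)]

def random_half_split(n, rng):
    """One of the 20 random 50% : 50% train/test splits (indices only)."""
    perm = rng.permutation(n)
    half = n // 2
    return perm[:half], perm[half:]

# Placeholder data standing in for a real base kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
K = unit_trace_normalize(X @ X.T)  # linear base kernel, unit trace
train_idx, test_idx = random_half_split(len(X), rng)
```

In the paper's protocol this split-and-evaluate step would be repeated 20 times and the CV over `C_grid` run on each training portion; the MKL solver itself would have to be supplied separately.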