Multi-View Randomized Kernel Classification via Nonconvex Optimization

Authors: Xiaojian Ding, Fan Yang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the real-world datasets demonstrate the superiority of the proposed method.
Researcher Affiliation | Academia | College of Information Engineering, Nanjing University of Finance and Economics, Nanjing 210023, China
Pseudocode | Yes | Algorithm 1: SDP based Branch-and-Bound algorithm
Open Source Code | No | The paper does not provide any explicit statement about releasing its source code or a link to a code repository.
Open Datasets | Yes | We conduct experiments on eight benchmark datasets to evaluate the performance of our proposed RMKL algorithm. Among these datasets, four are public multi-view datasets, and the other four are gene expression microarray one-view datasets publicly available at the Schliep Lab website. Their characteristics are summarized in Table 1. ... [1] https://schlieplab.org/Static/Supplements/CompCancer/datasets.htm [2] http://mlg.ucd.ie/datasets/3sources.html [3] http://research.microsoft.com/en-us/projects/objectclassrecognition/ [4] http://www.vision.caltech.edu/Image_Datasets/Caltech101/ [5] https://lms.comp.nus.edu.sg/wp-content/
Dataset Splits | Yes | For each run, half of the samples (as shown in column 3 of Table 1) are randomly chosen as a training set, and the remaining samples are retained as a test set. ... We train a classifier with the basis kernels on the validation set and calculate their average k-fold cross-validation error rates to measure the basis kernels' performance. [See the split-and-validate sketch after this table.]
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper mentions base learners such as SVM, SVMWB, LSSVM, and LSKELM but does not specify their versions or the versions of any other software libraries used (e.g., Python, PyTorch, TensorFlow, or scikit-learn versions).
Experiment Setup | Yes | In the proposed RMKL, two parameters need to be set: the number of randomized kernels M and the number of selected kernels m. For multi-view datasets, m is set to the number of views in the dataset. The M value is generally 2 to 3 times the m value; this conclusion is based on a large number of experimental results. In the following experiments, we set M = 10 and m = 5 by default for one-view datasets. Like other MKL algorithms, we employ SVM as the base learner in the proposed RMKL. In RMKL, GMKL, TSMKL, and ELMKL, the regularization parameter C is tuned over {10^-2, 0.1, 1, 10, 10^2, 10^3}. In ALMKL, the regularization parameter λ is experimentally set to 10^-3. In EAMKL, the regularization parameter λ is tuned over {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. In PMKL, parameter pairs (p, C) are tuned over {1, 2, 4, 10} x {10^-2, 0.1, 1, 10, 10^2, 10^3}. In RPMKL, parameter triples (r, p, C) are tuned over {1, 2, 4, 10} x {1, 2, 4, 10} x {10^-2, 0.1, 1, 10, 10^2, 10^3}. [See the grid-search sketch below.]
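
Below is a minimal sketch of the evaluation protocol quoted in the Dataset Splits row: one random 50/50 train/test split per run, with candidate basis kernels scored by their average k-fold cross-validation error rate. scikit-learn, the synthetic data, the RBF kernel widths, and k = 5 are illustrative assumptions, not details taken from the paper.

    # Split-and-validate sketch for the "Dataset Splits" row: a random 50/50
    # train/test split per run, plus k-fold cross-validation error rates used
    # to score candidate basis kernels. The data, RBF widths, and k = 5 are
    # illustrative assumptions, not values taken from the paper.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=20, random_state=0)

    # One run: half of the samples for training, the rest held out for testing.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.5, stratify=y, random_state=0)

    # Score each candidate (basis) kernel by its average k-fold CV error rate.
    candidate_gammas = [0.01, 0.1, 1.0]  # illustrative RBF kernel widths
    cv_errors = {}
    for gamma in candidate_gammas:
        clf = SVC(kernel="rbf", gamma=gamma, C=1.0)
        accuracy = cross_val_score(clf, X_train, y_train, cv=5).mean()
        cv_errors[gamma] = 1.0 - accuracy

    print("Average k-fold CV error per basis kernel:", cv_errors)

Repeating the split with different random seeds and averaging the test error would mirror the "for each run" protocol quoted above.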
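
The Experiment Setup row quotes explicit tuning grids. The sketch below enumerates those grids and runs a plain grid search over C for an SVM base learner, assuming scikit-learn; only the grid values come from the paper, while the synthetic data and the single-kernel SVM stand in for the actual RMKL pipeline, which is not reproduced here.

    # Hyperparameter grids quoted under "Experiment Setup", plus a plain grid
    # search over C with an SVM base learner. Only the grid values come from
    # the paper; the data and the single-kernel SVM are stand-ins for RMKL.
    from itertools import product

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    C_GRID = [1e-2, 0.1, 1, 10, 1e2, 1e3]  # RMKL / GMKL / TSMKL / ELMKL grid for C
    EAMKL_LAMBDA_GRID = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]  # EAMKL λ grid
    PMKL_GRID = list(product([1, 2, 4, 10], C_GRID))                   # (p, C) pairs
    RPMKL_GRID = list(product([1, 2, 4, 10], [1, 2, 4, 10], C_GRID))   # (r, p, C) triples
    print(len(PMKL_GRID), "PMKL configurations,", len(RPMKL_GRID), "RPMKL configurations")

    # Tune C for the SVM base learner over the quoted grid via 5-fold CV.
    X, y = make_classification(n_samples=200, n_features=20, random_state=0)
    search = GridSearchCV(SVC(kernel="rbf"), {"C": C_GRID}, cv=5)
    search.fit(X, y)
    print("Best C on the illustrative data:", search.best_params_["C"])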