Multi-Class Learning: From Theory to Algorithm

Authors: Jian Li, Yong Liu, Rong Yin, Hua Zhang, Lizhong Ding, Weiping Wang

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that our proposed methods can significantly outperform the existing multi-class classification methods."
Researcher Affiliation | Collaboration | Jian Li (1,2), Yong Liu (1), Rong Yin (1,2), Hua Zhang (1), Lizhong Ding (5), Weiping Wang (1,3,4). 1: Institute of Information Engineering, Chinese Academy of Sciences; 2: School of Cyber Security, University of Chinese Academy of Sciences; ... 5: Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE.
Pseudocode | Yes | The paper includes "Algorithm 1 Conv-MKL" and "Algorithm 2 SMSD-MKL".
Open Source Code | No | The paper states "We implement our proposed Conv-MKL and SMSD-MKL algorithms based on UFO-MKL." and lists links to the third-party libraries used (LIBSVM, DOGMA, SHOGUN), but it provides no link or explicit statement about releasing the source code for the proposed methods.
Open Datasets | Yes | "We experiment on 14 publicly available datasets: four of them evaluated in [38] (plant, nonpl, psortPos, and psortNeg) and others from LIBSVM Data."
Dataset Splits | Yes | "The regularization parameter α ∈ {2^i : i = −2, …, 12} in all algorithms, and ζ ∈ {2^i : i = 1, 2, …, 4} and β ∈ {10^i : i = −4, …, 1} in SMSD-MKL, are determined by 10-fold cross-validation on the training data. For each dataset, we run all methods 50 times with a randomly selected 80% for training and 20% for testing."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments.
Software Dependencies | Yes | "We complete comparison tests via implementations in LIBSVM (One-against-One and One-against-the-Rest), the DOGMA library (LMC, GMNP, ℓ1-norm and ℓ2-norm MC-MKL) and SHOGUN-6.1.3 (UFO-MKL)."
Experiment Setup | Yes | "For each dataset, we use the Gaussian kernel K(x, x′) = exp(−‖x − x′‖² / (2τ²)) as our basic kernels, where τ ∈ {2^i : i = −10, −9, …, 9, 10}. The regularization parameter α ∈ {2^i : i = −2, …, 12} in all algorithms, and ζ ∈ {2^i : i = 1, 2, …, 4} and β ∈ {10^i : i = −4, …, 1} in SMSD-MKL, are determined by 10-fold cross-validation on the training data."
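The kernel grid and evaluation protocol described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian kernel and the τ grid follow the paper, while the toy data, sample size, and the placeholder fit/score step are assumptions for the sketch.

```python
import numpy as np

def gaussian_kernel(X, Z, tau):
    """Gaussian kernel K(x, x') = exp(-||x - x'||^2 / (2 * tau^2))."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Z**2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-sq_dists / (2.0 * tau**2))

# Kernel-width grid from the paper: tau in {2^i : i = -10, -9, ..., 9, 10},
# giving 21 basic kernels per dataset.
tau_grid = [2.0**i for i in range(-10, 11)]

# Evaluation protocol from the paper: 50 runs, each with a random
# 80% / 20% train/test split of the dataset.
rng = np.random.default_rng(0)
n = 100                       # toy sample size (assumption)
X = rng.normal(size=(n, 5))   # toy feature matrix (assumption)

for run in range(50):
    perm = rng.permutation(n)
    n_train = int(0.8 * n)
    train, test = perm[:n_train], perm[n_train:]
    # One of the 21 basic kernels on the training split (here tau = 2^0 = 1).
    K_train = gaussian_kernel(X[train], X[train], tau=tau_grid[10])
    # ... fit a multi-class kernel classifier on K_train, score on X[test] ...
```

In the paper's setup, the regularization parameters (α, and ζ, β for SMSD-MKL) would additionally be selected by 10-fold cross-validation inside each training split before scoring on the held-out 20%.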