Infinite Kernel Learning: Generalization Bounds and Algorithms

Authors: Yong Liu, Shizhong Liao, Hailun Lin, Yinliang Yue, Weiping Wang

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we will empirically compare our PEP-based finite kernel learning (FKL) and infinite kernel learning (IFKL) with 9 popular finite and infinite kernel learning methods: centered-alignment-based MKL with linear combination (CABMKL (linear)) and conic combination (CABMKL (conic)) (Cortes, Mohri, and Rostamizadeh 2010), SimpleMKL (Rakotomamonjy et al. 2008), the generalized MKL algorithm (GMKL) (Varma and Babu 2009), the nonlinear MKL algorithm with L1-norm (NLMKL (p = 1)) and L2-norm (NLMKL (p = 2)) (Cortes, Mohri, and Rostamizadeh 2009), the group Lasso-based MKL algorithms with L1-norm (GLMKL (p = 1)) and L2-norm (GLMKL (p = 2)) (Kloft et al. 2011), and the state-of-the-art infinite kernel learning (IKL) (Gehler and Nowozin 2008). The data sets are 10 publicly available data sets from LIBSVM Data, shown in Table 1.
Researcher Affiliation | Academia | Yong Liu (1), Shizhong Liao (2), Hailun Lin (1), Yinliang Yue (1), Weiping Wang (1); (1) Institute of Information Engineering, CAS; (2) School of Computer Science and Technology, Tianjin University
Pseudocode | Yes | Algorithm 1: Finite Kernel Learning (FKL); Algorithm 2: Infinite Kernel Learning (IFKL); Algorithm 3: Sub-Problem
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | Yes | The data sets are 10 publicly available data sets from LIBSVM Data, shown in Table 1.
Dataset Splits | Yes | For each data set, we run all methods 30 times with a randomly selected 50% of all data for training and the other 50% for testing. The parameters λ ∈ {2^i : i = −5, ..., 5} and t ∈ {2^i : i = 1, ..., 4} of our algorithms are determined by 3-fold cross-validation on the training set. (A runnable sketch of this protocol appears after the table.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types) used for running the experiments were mentioned in the paper.
Software Dependencies | No | The paper mentions using "an SVM solver" but does not specify any software names with version numbers.
Experiment Setup | Yes | For finite kernels, we use the Gaussian kernel K(x, x′) = exp(−τ‖x − x′‖₂²) and the polynomial kernel K(x, x′) = (1 + xᵀx′)^d as our basic kernels, with τ ∈ {2^i : i = −10, −9, ..., 10} and d ∈ {1, 2, ..., 20}. For infinite kernels, we use the Gaussian and polynomial kernels with continuously parameterized sets, τ ∈ [2^−10, 2^10] and d ∈ [1, 20]. The regularization parameter C of all algorithms is set to 1. The other parameters of the compared algorithms follow the same experimental settings as in their papers. The parameters λ ∈ {2^i : i = −5, ..., 5} and t ∈ {2^i : i = 1, ..., 4} of our algorithms are determined by 3-fold cross-validation on the training set. (The kernel definitions and parameter grids are sketched in code after the table.)
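
The base kernels and grids quoted in the Experiment Setup row are fully specified, so they can be written down directly. The following is a minimal NumPy sketch of the two kernels and the finite FKL grids, with the continuous IFKL ranges noted in a comment; it illustrates the stated formulas and is not the authors' implementation.

```python
# Minimal NumPy versions of the two base kernels from the Experiment Setup
# row: Gaussian K(x, x') = exp(-tau * ||x - x'||_2^2) and polynomial
# K(x, x') = (1 + x^T x')^d, plus the finite parameter grids used for FKL.
import numpy as np

def gaussian_kernel(X, Z, tau):
    # Squared Euclidean distances between all rows of X and all rows of Z.
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-tau * sq_dists)

def polynomial_kernel(X, Z, d):
    return (1.0 + X @ Z.T) ** d

tau_grid = [2.0 ** i for i in range(-10, 11)]  # tau in {2^i : i = -10, ..., 10}
d_grid = list(range(1, 21))                    # d in {1, 2, ..., 20}
# For IFKL the parameters instead range over the continuous sets
# tau in [2^-10, 2^10] and d in [1, 20].

X = np.random.default_rng(0).normal(size=(5, 3))
K = gaussian_kernel(X, X, tau=tau_grid[10])  # tau = 2^0 = 1
print(K.shape)  # (5, 5) Gram matrix
```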
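The Dataset Splits row likewise specifies the evaluation protocol completely: 30 random 50/50 train/test splits, with hyperparameters chosen by 3-fold cross-validation on each training half. The sketch below reproduces that protocol with scikit-learn, assuming SVC with an RBF kernel as a stand-in learner (the paper's FKL/IFKL code is not public, per the Open Source Code row) and synthetic data in place of the 10 LIBSVM data sets; the gamma grid here is only a placeholder for the paper's λ and t grids.

```python
# Runnable sketch of the evaluation protocol from the Dataset Splits row:
# 30 random 50/50 splits, model selection by 3-fold CV on the training half,
# accuracy measured on the held-out half. SVC is a stand-in for FKL/IFKL;
# C is fixed to 1 as in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# The paper uses 10 LIBSVM data sets; a local copy could be read with
# sklearn.datasets.load_svmlight_file. Synthetic data keeps this sketch
# self-contained.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

param_grid = {"gamma": [2.0 ** i for i in range(-10, 11)]}  # placeholder grid
test_accuracies = []
for seed in range(30):  # 30 repetitions with random 50/50 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, random_state=seed)
    search = GridSearchCV(SVC(C=1.0, kernel="rbf"), param_grid, cv=3)
    search.fit(X_tr, y_tr)  # 3-fold cross-validation on the training half
    test_accuracies.append(search.score(X_te, y_te))

print(f"mean +/- std accuracy over 30 runs: "
      f"{np.mean(test_accuracies):.4f} +/- {np.std(test_accuracies):.4f}")
```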