Scalable Learning in Reproducing Kernel Krein Spaces

Authors: Dino Oglic, Thomas Gärtner

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The effectiveness of the devised approaches is evaluated empirically using indefinite kernels defined on structured and vectorial data representations. In this section, we report the results of experiments aimed at demonstrating the effectiveness of: i) the Nyström method in low-rank approximations of indefinite kernel matrices, and ii) the described scalable Kreĭn approaches in classification tasks with pairwise (dis)similarity matrices.
Researcher Affiliation | Academia | Dino Oglic (1), Thomas Gärtner (2); (1) Department of Informatics, King's College London, UK; (2) School of Computer Science, University of Nottingham, UK.
Pseudocode | No | The paper describes methods using mathematical derivations and textual explanations but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing code or a link to a code repository for the described methodology.
Open Datasets | Yes | In the first set of experiments, we take several datasets from UCI and LIACC repositories and define kernel matrices on them using the same indefinite kernels as previous work (Oglic and Gärtner, 2018, Appendix D). In the second set of experiments, we evaluate the effectiveness of the proposed least square methods and the support vector machine on classification tasks [1] with pairwise dissimilarity matrices (Pekalska and Duin, 2005; Duin and Pekalska, 2009). Following the instructions in Pekalska and Haasdonk (2009), the dissimilarity matrices are converted to similarities by applying the transformation characteristic to multi-dimensional scaling (e.g., see the negative double-centering transformation in Cox and Cox, 2000). (Footnote 1: http://prtools.org/disdatasets/index.html)
Dataset Splits | Yes | In each simulation, we perform 10-fold stratified cross-validation and measure the effectiveness of an approach with the average/median percentage of misclassified examples.
Hardware Specification | No | The paper acknowledges the 'University of Nottingham High Performance Computing Facility' but does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | Prior to computation of kernel matrices, all the data matrices were normalized to have mean zero and unit variance across features. In each simulation, we perform 10-fold stratified cross-validation and measure the effectiveness of an approach with the average/median percentage of misclassified examples. ...hyperparameters λ ∈ ℝ+... To automatically tune the hyperparameters, one can follow the procedure described in Chapelle et al. (2002) and use implicit derivation to compute the gradient of the optimal solution with respect to the hyperparameters.
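
The sketches below illustrate the main experimental ingredients quoted in the table. First, the Research Type row refers to Nyström low-rank approximations of indefinite kernel matrices. The following is a minimal NumPy sketch of the classical Nyström formula K ≈ C W⁺ Cᵀ with a pseudo-inverse of the landmark block, so positive definiteness is not assumed; the function name `nystroem_indefinite` and the uniform landmark sampling are illustrative assumptions, and the paper's specific Kreĭn-space treatment is not reproduced here.

```python
import numpy as np

def nystroem_indefinite(K, landmarks):
    """Nystroem low-rank approximation K ~ C @ pinv(W) @ C.T for a
    symmetric, possibly indefinite kernel matrix K, where C is the
    cross-kernel block and W the landmark-landmark block. The
    pseudo-inverse avoids assuming positive definiteness."""
    C = K[:, landmarks]                      # n x m cross block
    W = K[np.ix_(landmarks, landmarks)]      # m x m landmark block
    W_pinv = np.linalg.pinv((W + W.T) / 2)   # symmetrize before inverting
    return C @ W_pinv @ C.T                  # rank <= m approximation

# Toy usage: a random symmetric (hence generally indefinite) matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
K = (A + A.T) / 2
landmarks = rng.choice(200, size=50, replace=False)
K_hat = nystroem_indefinite(K, landmarks)
print(np.linalg.norm(K - K_hat) / np.linalg.norm(K))  # relative error
```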
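The Open Datasets row mentions converting dissimilarity matrices to similarities via negative double-centering. A minimal sketch, assuming the classical-MDS convention of centering the elementwise-squared dissimilarities (Cox and Cox, 2000); whether the squaring step applies can depend on the dataset's convention.

```python
import numpy as np

def negative_double_centering(D):
    """Classical-MDS conversion of a symmetric dissimilarity matrix D
    into similarities: S = -0.5 * J @ (D ** 2) @ J with the centering
    matrix J = I - (1/n) * 1 1^T. S is symmetric but, in general,
    indefinite, which is why Krein-space methods apply."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ (D ** 2) @ J
```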
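The evaluation protocol in the Dataset Splits and Experiment Setup rows (feature standardization to zero mean and unit variance before computing kernels, then 10-fold stratified cross-validation reporting average/median misclassification percentages) could be set up as below. A plain scikit-learn SVC on a precomputed kernel stands in for the paper's Kreĭn-space learners; note that libsvm gives no guarantees on indefinite Gram matrices, which is precisely the issue the paper's methods address.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def cv_misclassification(K, y, n_splits=10, seed=0):
    """Stratified k-fold cross-validation on a precomputed kernel
    matrix K; returns the average and median percentage of
    misclassified examples across folds."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    errors = []
    for train, test in skf.split(np.zeros(len(y)), y):
        clf = SVC(kernel="precomputed", C=1.0)  # stand-in learner
        clf.fit(K[np.ix_(train, train)], y[train])
        pred = clf.predict(K[np.ix_(test, train)])
        errors.append(100.0 * np.mean(pred != y[test]))
    return float(np.mean(errors)), float(np.median(errors))
```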
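Finally, the hyperparameter-tuning remark in the Experiment Setup row refers to implicit differentiation (Chapelle et al., 2002). For regularized least squares the idea reduces to differentiating the stationarity condition (K + λI)α = y with respect to λ, which yields dα/dλ = -(K + λI)⁻¹α. The sketch below is a simplified stand-in for the paper's Kreĭn least-squares variants.

```python
import numpy as np

def rls_alpha_and_grad(K, y, lam):
    """Regularized least squares on a kernel matrix K: the optimum
    solves (K + lam * I) @ alpha = y. Differentiating this identity
    in lam gives alpha + (K + lam * I) @ d_alpha = 0, hence
    d_alpha/d_lam = -(K + lam * I)^{-1} @ alpha."""
    A = K + lam * np.eye(K.shape[0])
    alpha = np.linalg.solve(A, y)
    d_alpha = -np.linalg.solve(A, alpha)
    return alpha, d_alpha
```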