Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Improved Classification Rates for Localized SVMs

Authors: Ingrid Blaschzyk, Ingo Steinwart

JMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We take advantage of this observation to derive global learning rates for localized SVMs with Gaussian kernels and hinge loss. Under suitable sets of assumptions, our rates outperform known classification rates for localized SVMs, for global SVMs, and for other learning algorithms based on, e.g., plug-in rules or trees. The statistical analysis of the excess risk relies on a simple partitioning-based technique, which splits the input space into a subset that is close to the decision boundary and a subset that is sufficiently far away. A crucial condition for deriving the improved global rates is a margin condition that relates the distance to the decision boundary to the amount of noise.
Researcher Affiliation | Academia | Ingrid Blaschzyk (EMAIL), Ingo Steinwart (EMAIL), Institute for Stochastics and Applications, University of Stuttgart, 70569 Stuttgart, Germany
Pseudocode | No | The paper describes methodologies verbally and mathematically, presenting theorems and proofs, but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper is theoretical and focuses on mathematical derivations of learning rates. It does not mention releasing its own source code for the methodology described in the paper. It references a third-party tool, 'liquid SVM (Steinwart and Thomann, 2017)', which was used for experimental results in other works, not for the current paper's theoretical contributions.
Open Datasets | No | The paper is purely theoretical and focuses on deriving learning rates. It does not conduct experiments on any specific datasets. Mentions of 'large-scale datasets' or 'small- and medium-sized datasets' are made in the context of motivating the research problem, not as datasets used or provided by the authors for their own work.
Dataset Splits | No | The paper is purely theoretical and does not involve empirical evaluation with datasets. Therefore, it does not provide any information regarding dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is purely theoretical, focusing on mathematical analysis and derivations of learning rates. It does not describe any experimental setup or hardware used for computation.
Software Dependencies | No | The paper is purely theoretical and does not conduct experiments that would require specific software dependencies with version numbers for its own methodology. While 'liquid SVM' is mentioned, it refers to a tool used in other works, not in the current paper's contributions, and no version is provided.
Experiment Setup | No | The paper is purely theoretical and does not involve conducting experiments. Therefore, it does not provide details about an experimental setup, hyperparameters, optimizer settings, or other training configurations.
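The partitioning idea described in the abstract excerpt above can be illustrated with a toy sketch. This is not the paper's construction (the paper is purely theoretical and contains no code); it is a minimal, hypothetical example of a localized SVM: the input space is split into cells, and each cell trains a Gaussian-kernel SVM (hinge loss) if it meets the decision boundary, or a constant predictor if it lies far from it. All data, parameters, and helper names below are illustrative assumptions.

```python
# Toy sketch of a localized SVM (illustrative only, not the paper's method):
# partition the input space into cells; cells that meet the decision boundary
# get a Gaussian-kernel SVM, cells far from it get a constant predictor.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(400, 2))
y = (X[:, 0] > 0.4).astype(int)  # toy decision boundary at x0 = 0.4

n_cells = 4  # partition [0, 1) into 4 strips along the first coordinate

def cell_of(x):
    return min(int(x[0] * n_cells), n_cells - 1)

# Fit one local model per cell. A cell containing both classes crosses the
# decision boundary and gets an SVM; a single-class cell is "far away" and
# gets a constant predictor, mirroring the near/far split in the analysis.
local_models = {}
for j in range(n_cells):
    mask = np.array([cell_of(x) == j for x in X])
    if len(np.unique(y[mask])) == 1:
        local_models[j] = int(y[mask][0])  # constant: cell far from boundary
    else:
        local_models[j] = SVC(kernel="rbf", gamma=50.0, C=10.0).fit(X[mask], y[mask])

def predict(x):
    """Route the query point to its cell's local model."""
    m = local_models[cell_of(x)]
    return m if isinstance(m, int) else int(m.predict([x])[0])

print(predict([0.9, 0.5]))  # cell far from the boundary: constant predictor
```

With this setup only the strip [0.25, 0.5) contains the boundary, so only that cell trains an actual SVM; the other three cells reduce to constants, which is the computational appeal of the localized approach.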