Generalization in Kernel Regression Under Realistic Assumptions

Authors: Daniel Barzilai, Ohad Shamir

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This work aims to provide a unified theory for upper bounding the excess risk of kernel regression in nearly all common and realistic settings. When applied to common kernels, the results imply benign overfitting in high input dimensions, nearly tempered overfitting in fixed dimensions, and explicit convergence rates for regularized regression.
Researcher Affiliation | Academia | Weizmann Institute of Science. Correspondence to: Daniel Barzilai <daniel.barzilai@weizmann.ac.il>, Ohad Shamir <ohad.shamir@weizmann.ac.il>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link or an explicit code-release statement) for source code implementing the described methodology.
Open Datasets | No | The paper mentions generating data from distributions such as inputs drawn uniformly from the sphere S^(d-1) for its figures, but it does not provide concrete access information (link, DOI, repository, or formal citation) for a publicly available dataset used in the experiments (a hypothetical data-generation sketch follows this table).
Dataset Splits | No | The paper refers to 'training points' and 'test samples' in its theoretical framework and figures, but it does not give the split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types with speeds, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not list the ancillary software (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | No | The paper does not give concrete experimental setup details such as hyperparameter values, training configurations, or system-level settings for the experiments shown in the figures.
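Since no code or data are released, the figures cannot be reproduced exactly. The following is a minimal, hypothetical NumPy sketch of the kind of setup the paper describes only informally: inputs drawn uniformly from the sphere S^(d-1), a standard kernel, and kernel regression with an optional ridge term. The kernel choice, dimension, sample sizes, ridge parameter, and toy target function below are illustrative assumptions, not values taken from the paper.

import numpy as np

def sample_sphere(n, d, rng):
    # Draw n points uniformly on the unit sphere S^(d-1) by normalizing Gaussian vectors.
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def gaussian_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B (an assumed kernel choice).
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
d, n_train, n_test, ridge = 20, 200, 1000, 1e-3   # illustrative values only

X = sample_sphere(n_train, d, rng)
y = X[:, 0] + 0.1 * rng.standard_normal(n_train)   # toy noisy target, not from the paper
X_test = sample_sphere(n_test, d, rng)
y_test = X_test[:, 0]

# Kernel ridge regression: alpha = (K + ridge * I)^{-1} y.
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + ridge * np.eye(n_train), y)
pred = gaussian_kernel(X_test, X) @ alpha
print("test MSE (proxy for excess risk):", np.mean((pred - y_test) ** 2))

Taking ridge close to zero yields the interpolating (minimum-norm) solution whose overfitting behaviour the paper analyzes, while ridge > 0 corresponds to the regularized regression for which explicit convergence rates are derived.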