On the Saturation Effects of Spectral Algorithms in Large Dimensions

Authors: Weihao Lu, Haobo Zhang, Yicheng Li, Qian Lin

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type | Experimental | "A.2 Numerical experiments: We conducted two experiments using two specific kernels: the RBF kernel and the NTK kernel. Experiment 1 was designed to confirm the optimal rate of kernel gradient flow and KRR when s = 1. Experiment 2 was designed to illustrate the saturation effect of KRR when s > 1."
Researcher Affiliation | Academia | Weihao Lu (luwh19@mails.tsinghua.edu.cn), Haobo Zhang (zhang-hb21@mails.tsinghua.edu.cn), Yicheng Li (liyc22@mails.tsinghua.edu.cn), and Qian Lin (qianlin@tsinghua.edu.cn), all with the Department of Statistics and Data Science, Tsinghua University, Beijing, China 100084.
Pseudocode | No | No structured pseudocode or algorithm blocks were found.
Open Source Code | No | The NeurIPS checklist states: "The paper does not include experiments requiring code."
Open Datasets | No | The data are generated synthetically: y_i = f(x_i) + ε_i, i = 1, ..., n, where each x_i is sampled i.i.d. from the uniform distribution on the sphere S^d, and the ε_i are i.i.d. N(0, 1). This is a synthetic data-generation procedure, not a publicly accessible dataset.
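The quoted generation procedure can be sketched in a few lines. The target function f and the dimension d used below are placeholders (the paper's concrete choices are not quoted in this excerpt), and points on S^d are drawn by normalizing standard Gaussian vectors in R^(d+1):

```python
import numpy as np

def sample_sphere(n, d, rng):
    """Draw n points i.i.d. uniformly on the unit sphere S^d (in R^(d+1))
    by normalizing standard Gaussian vectors."""
    x = rng.standard_normal((n, d + 1))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def generate_data(n, d, f, seed=0):
    """y_i = f(x_i) + eps_i with eps_i i.i.d. N(0, 1), as in the quoted setup."""
    rng = np.random.default_rng(seed)
    X = sample_sphere(n, d, rng)
    y = f(X) + rng.standard_normal(n)
    return X, y

# Example with a placeholder target function (the paper's true f is not
# specified in this excerpt):
X, y = generate_data(n=100, d=3, f=lambda X: X[:, 0] ** 2)
```

Normalizing Gaussian vectors is the standard way to sample uniformly on a sphere, since the multivariate standard normal is rotation-invariant.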
Dataset Splits | Yes | "We use 5-fold cross-validation to select the regularization parameter λ in kernel ridge regression. The alternative values of λ in cross-validation are C2 · n^(−C3), where C2 ∈ {0.001, 0.005, 0.01, 0.1, 0.5, 1, 2, 5, 10, 40, 100, 300, 1000} and C3 ∈ {0.1, 0.2, ..., 1.5}."
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) were provided for running the experiments.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) were explicitly mentioned.
Experiment Setup | Yes | "We choose the stopping time t in kernel gradient flow as C1 · n^(0.5), where C1 ∈ {0.001, 0.01, 0.1, 1, 10, 100, 1000}. We use 5-fold cross-validation to select the regularization parameter λ in kernel ridge regression. The alternative values of λ in cross-validation are C2 · n^(−C3), where C2 ∈ {0.001, 0.005, 0.01, 0.1, 0.5, 1, 2, 5, 10, 40, 100, 300, 1000} and C3 ∈ {0.1, 0.2, ..., 1.5}."
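The setup quoted above (kernel gradient flow stopped early, and 5-fold cross-validation over a grid of λ values for KRR) can be sketched as follows. This is a minimal sketch, not the authors' code: the RBF bandwidth, the (K + nλI) normalization of KRR, and the spectral form of the gradient-flow estimator are assumptions.

```python
import numpy as np
from itertools import product

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian RBF kernel; the bandwidth gamma is an assumed default.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_predict(X_tr, y_tr, X_te, lam):
    # Kernel ridge regression; the (K + n*lam*I) normalization is assumed.
    n = len(X_tr)
    K = rbf_kernel(X_tr, X_tr)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y_tr)
    return rbf_kernel(X_te, X_tr) @ alpha

def kgf_predict(X_tr, y_tr, X_te, t):
    # Kernel gradient flow stopped at time t (t = C1 * n^0.5 in the setup),
    # computed spectrally: f_t = k(., X) K^{-1} (I - exp(-t K / n)) y.
    n = len(X_tr)
    K = rbf_kernel(X_tr, X_tr)
    w, V = np.linalg.eigh(K)
    w = np.clip(w, 1e-12, None)  # guard against near-zero eigenvalues
    alpha = V @ (((1.0 - np.exp(-t * w / n)) / w) * (V.T @ y_tr))
    return rbf_kernel(X_te, X_tr) @ alpha

def select_lambda_cv(X, y, n_folds=5, seed=0):
    # 5-fold CV over lambda = C2 * n^(-C3) with the grids quoted above.
    n = len(X)
    C2s = [0.001, 0.005, 0.01, 0.1, 0.5, 1, 2, 5, 10, 40, 100, 300, 1000]
    C3s = [round(0.1 * k, 1) for k in range(1, 16)]  # 0.1, 0.2, ..., 1.5
    folds = np.array_split(np.random.default_rng(seed).permutation(n), n_folds)
    best_lam, best_err = None, np.inf
    for C2, C3 in product(C2s, C3s):
        lam = C2 * n ** (-C3)
        err = 0.0
        for i in range(n_folds):
            te = folds[i]
            tr = np.concatenate([folds[j] for j in range(n_folds) if j != i])
            err += np.mean((krr_predict(X[tr], y[tr], X[te], lam) - y[te]) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```

The spectral form of `kgf_predict` follows from integrating the gradient flow of the empirical least-squares risk in the RKHS, whose coefficient path solves α′(t) = (y − Kα)/n with α(0) = 0.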