Target alignment in truncated kernel ridge regression
Authors: Arash Amini, Richard Baumgartner, Dai Feng
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide experiments verifying the multiple-descent and non-monotonic behavior of the regularization curves as well as the improved rate of Theorem 2 (Section 4.2). We present various simulation results to demonstrate the multiple-descent and phase transition behavior of the regularization curves, and corroborate the theoretical results. |
| Researcher Affiliation | Collaboration | Arash A. Amini (University of California, Los Angeles), Richard Baumgartner (Merck & Co., Inc., Rahway, New Jersey, USA), Dai Feng (Data and Statistical Sciences, AbbVie Inc.) |
| Pseudocode | No | The paper contains mathematical derivations and proofs, but no structured pseudocode or algorithm blocks were found. |
| Open Source Code | Yes | The code for reproducing the simulations is available at [3]. |
| Open Datasets | No | The paper describes generating synthetic data for simulations ("200 samples generated from a uniform distribution on [0, 1]^d") rather than using a publicly available dataset with specific access information; a data-generation sketch follows the table. |
| Dataset Splits | No | The paper describes generating synthetic data for simulations but does not specify any explicit training, validation, or test dataset splits. |
| Hardware Specification | No | The paper states only that "our simulations were done on a regular laptop", with no further hardware details. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) were mentioned in the paper. |
| Experiment Setup | No | The paper describes simulation parameters, such as a Gaussian kernel e^{−‖x − x′‖²/(2h²)} in d = 4 dimensions with bandwidth h = √d/2, and fixing λ when tracing the regularization curves. It also notes that the entries of ξ are generated at random. However, it does not explicitly list hyperparameters or system-level training settings in the usual machine-learning sense; a truncated-KRR sketch follows the table. |
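
Because the data are synthetic, the quoted setup can be mimicked directly. Below is a minimal sketch of the data-generation step, assuming NumPy; the excerpt does not specify the distribution of the random entries of ξ, so the standard normal draw here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quoted setup: "200 samples generated from a uniform distribution on [0, 1]^d",
# with d = 4 in the paper's Gaussian-kernel simulations.
n, d = 200, 4
X = rng.uniform(0.0, 1.0, size=(n, d))

# The excerpt only says the entries of xi are generated at random; a standard
# normal draw is an illustrative assumption, not the paper's exact recipe.
xi = rng.standard_normal(n)
```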
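For the experiment setup, the following sketch assembles the quoted pieces into a truncated kernel ridge regression fit: a Gaussian kernel with the stated bandwidth (read here as √d divided by 2), a rank-r spectral truncation of the kernel matrix, and a fixed regularization level λ. The truncation and regularization conventions (top-r eigendecomposition, the n·λ scaling, the placeholder response y, and the choice r = 50) are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def gaussian_kernel(X, Y, h):
    """Pairwise Gaussian kernel exp(-||x - y||^2 / (2 h^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * h ** 2))

def truncated_krr_coef(K, y, r, lam):
    """Solve (K_r + n*lam*I) alpha = y, where K_r is the rank-r
    approximation of K built from its top-r eigenpairs."""
    n = K.shape[0]
    w, U = np.linalg.eigh(K)        # eigenvalues in ascending order
    w_r, U_r = w[-r:], U[:, -r:]    # keep the r largest eigenpairs
    K_r = (U_r * w_r) @ U_r.T       # rank-r truncation of K
    return np.linalg.solve(K_r + n * lam * np.eye(n), y)

rng = np.random.default_rng(0)
n, d = 200, 4
X = rng.uniform(size=(n, d))
y = rng.standard_normal(n)          # placeholder response (assumption)

h = np.sqrt(d) / 2.0                # one reading of the quoted "h = sqrt(d)/2"
K = gaussian_kernel(X, X, h)
alpha = truncated_krr_coef(K, y, r=50, lam=1e-3)
print(alpha[:5])
```

Sweeping `lam` with `r` fixed would trace one regularization curve of the kind whose non-monotonic, multiple-descent behavior the paper studies.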