Implicit Regularization of Random Feature Models

Authors: Arthur Jacot, Berfin Şimşek, Francesco Spadaro, Clément Hongler, Franck Gabriel

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we empirically find an extremely good agreement between the test errors of the average λ-RF predictor and λ-KRR predictor."
Researcher Affiliation | Academia | "¹Chair of Statistical Field Theory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland ²Laboratory of Computational Neuroscience, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland."
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about open-sourcing code or links to repositories for the described methodology.
Open Datasets | Yes | "We train the RF predictors on N = 100 MNIST data points where K is the RBF kernel, i.e. K(x, x′) = exp(−‖x − x′‖² / ℓ)."
Dataset Splits | No | The paper mentions using N = 100 MNIST data points for training and 100 random test points, but it does not specify explicit training/validation/test splits, percentages, or a cross-validation setup.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for the experiments (e.g., GPU models, CPU types, or cloud instances).
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, or solvers).
Experiment Setup | No | The paper mentions N = 100 MNIST training points, the RBF kernel, and the ranges of λ shown in the figures, but it lacks comprehensive setup details such as specific hyperparameter values (e.g., learning rate, batch size, optimizer), model initialization, or training schedules.
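
Since the paper ships no code (see the Open Source Code row above), the following is a minimal sketch of the comparison described in the quoted excerpts, under stated assumptions: synthetic Gaussian data stands in for the N = 100 MNIST points, random Fourier features stand in for the paper's random feature model, and the length scale ℓ, ridge λ, feature count P, and the K + λNI ridge normalization are all illustrative guesses rather than the authors' settings. It compares the test error of the average λ-RF predictor against the λ-KRR predictor; note the paper's full statement involves an effective ridge λ̃ ≥ λ that approaches λ as the number of features grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: the paper uses N = 100 MNIST points; we use
# synthetic Gaussian inputs with a smooth target so the script is self-contained.
N, N_test, d = 100, 100, 20
X, X_test = rng.standard_normal((N, d)), rng.standard_normal((N_test, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(N)
y_test = np.sin(X_test[:, 0])

ell = 2.0 * d   # length scale ℓ of the quoted kernel (illustrative guess)
lam = 1e-3      # ridge λ (illustrative guess)
P = 500         # number of random features (illustrative guess)

def rbf_kernel(A, B):
    """K(x, x') = exp(-||x - x'||^2 / ℓ), the kernel quoted in the table."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / ell)

# λ-KRR predictor, assuming the common ridge normalization K + λNI.
K = rbf_kernel(X, X)
f_krr = rbf_kernel(X_test, X) @ np.linalg.solve(K + lam * N * np.eye(N), y)

# One draw of a λ-RF predictor, using random Fourier features for the same
# kernel: exp(-||x - x'||^2 / ℓ) corresponds to frequencies w ~ N(0, (2/ℓ) I).
def rf_predictor(seed):
    r = np.random.default_rng(seed)
    W = r.normal(scale=np.sqrt(2.0 / ell), size=(d, P))
    b = r.uniform(0.0, 2.0 * np.pi, size=P)
    features = lambda Z: np.sqrt(2.0 / P) * np.cos(Z @ W + b)
    Phi = features(X)
    theta = np.linalg.solve(Phi.T @ Phi + lam * N * np.eye(P), Phi.T @ y)
    return features(X_test) @ theta

# Average λ-RF predictor over many independent feature draws.
f_rf = np.mean([rf_predictor(s) for s in range(100)], axis=0)

print("λ-KRR test MSE:        ", np.mean((f_krr - y_test) ** 2))
print("average λ-RF test MSE: ", np.mean((f_rf - y_test) ** 2))
```

Averaging over draws matters here: a single RF draw at small P typically shows a visibly higher test error, while the averaged predictor tracks the KRR one, mirroring the empirical agreement quoted in the Research Type row.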