On the Variance, Admissibility, and Stability of Empirical Risk Minimization

Authors: Gil Kur, Eli Putterman, Alexander Rakhlin

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we prove that, under relatively mild assumptions, the suboptimality of ERM must be due to its bias. Namely, the variance error term of ERM (in terms of the bias–variance decomposition) enjoys the minimax rate.
Researcher Affiliation | Academia | Gil Kur, EECS, MIT, gilkur@mit.edu; Eli Putterman, Mathematics Department, Tel Aviv University, putterman@mail.tau.ac.il; Alexander Rakhlin, BCS & IDSS, MIT, rakhlin@mit.edu
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the methodology described.
Open Datasets | No | The paper is theoretical and focuses on mathematical proofs and characterizations. It does not refer to specific datasets used for training, validation, or testing.
Dataset Splits | No | The paper is theoretical and focuses on mathematical proofs and characterizations. It does not provide details on training/validation/test dataset splits.
Hardware Specification | No | The paper does not mention any specific hardware used for running experiments, which is consistent with its theoretical nature.
Software Dependencies | No | The paper does not provide details about specific software dependencies with version numbers, which is consistent with its theoretical nature.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or system-level training settings.