Generalization of ERM in Stochastic Convex Optimization: The Dimension Strikes Back
Authors: Vitaly Feldman
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this work we substantially strengthen the lower bound in [18] proving that a linear dependence on the dimension d is necessary for ERM (and, consequently, uniform convergence). We then extend the lower bound to all ℓp/ℓq setups and examine several related questions. Finally, we examine a more general setting of bounded-range SCO (that is |f(x)| ≤ 1 for all x ∈ K). (See the SCO/ERM sketch after this table.) |
| Researcher Affiliation | Industry | Vitaly Feldman, IBM Research - Almaden |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper is theoretical and does not mention releasing any source code. |
| Open Datasets | No | This is a theoretical paper and does not involve training on datasets. |
| Dataset Splits | No | This is a theoretical paper and does not involve dataset splits for validation. |
| Hardware Specification | No | The paper does not describe experiments and therefore provides no hardware specifications. |
| Software Dependencies | No | The paper does not describe experiments and therefore provides no software dependencies. |
| Experiment Setup | No | The paper does not describe experiments and therefore provides no experimental setup details. |
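
For context on the Research Type row, here is a minimal LaTeX sketch of the standard stochastic convex optimization (SCO) and empirical risk minimization (ERM) setup the quoted passage refers to. The notation (F, D, K, n, ε) is common usage assumed here, not copied from the paper; only the "linear dependence on d" statement comes from the quoted text.

```latex
% Standard SCO/ERM setup (assumed notation; not verbatim from the paper).
\documentclass{article}
\usepackage{amsmath,amssymb}
\DeclareMathOperator*{\argmin}{arg\,min}
\begin{document}
The population objective over a convex body $K \subseteq \mathbb{R}^d$ is
\[
  F(x) = \mathop{\mathbb{E}}_{f \sim D}\bigl[ f(x) \bigr],
\]
and, given $n$ i.i.d.\ samples $f_1, \dots, f_n \sim D$, ERM returns any
\[
  \hat{x} \in \argmin_{x \in K} \; \frac{1}{n} \sum_{i=1}^{n} f_i(x).
\]
The quoted lower bound says that for every ERM solution $\hat{x}$ to
satisfy $F(\hat{x}) - \min_{x \in K} F(x) \le \epsilon$, the sample
size $n$ must grow linearly with the dimension $d$.
\end{document}
```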