Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Early Stopping and Non-parametric Regression: An Optimal Data-dependent Stopping Rule
Authors: Garvesh Raskutti, Martin J. Wainwright, Bin Yu
JMLR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show through simulation that our stopping rule compares favorably to two other stopping rules, one based on hold-out data and the other based on Stein's unbiased risk estimate. We also establish a tight connection between our early stopping strategy and the solution path of a kernel ridge regression estimator. (...) We complement these theoretical results with simulation studies that compare its performance to other rules... |
| Researcher Affiliation | Academia | Garvesh Raskutti (EMAIL), Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706-1799, USA; Martin J. Wainwright (EMAIL) and Bin Yu (EMAIL), Department of Statistics, University of California, Berkeley, CA 94720-1776, USA |
| Pseudocode | No | The paper describes the gradient descent update in equation (3) and other mathematical formulations, but it does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that source code for the described methodology is publicly available or released. |
| Open Datasets | No | The paper describes generating synthetic data for its simulations: "we formed n = 100 i.i.d. observations of the form y_i = f*(x_i) + w_i" and "we generated samples from the observation model y_i = f*(x_i) + w_i". It uses fixed designs (x_i = i/n) or random designs (x_i ~ Unif(0, 1)). No external, named public datasets are used or referenced. |
| Dataset Splits | Yes | In this section, we provide a comparison of our stopping rule to two other stopping rules, as well as an oracle method that involves knowledge of f*, and so cannot be computed in practice. (...) We begin by comparing to a simple hold-out method that performs gradient descent using 50% of the data, and uses the other 50% of the data to estimate the risk. In more detail, assuming that the sample size is even for simplicity, we split the full data set {x_i}_{i=1}^n into two equally sized subsets S_tr and S_te. The data indexed by the training set S_tr is used to estimate the function f̂_tr,t using the gradient descent update (3). At each iteration t = 0, 1, 2, ..., the data indexed by S_te is used to estimate the risk via R̂_HO(f_t) = (1/n) Σ_{i ∈ S_te} (y_i − f̂_tr,t(x_i))² |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments or simulations. |
| Software Dependencies | No | The paper does not specify any particular software libraries, packages, or their version numbers used for implementing the described methods or running simulations. |
| Experiment Setup | Yes | In particular, we formed n = 100 i.i.d. observations of the form y_i = f*(x_i) + w_i, where w_i ~ N(0, 1), and using the fixed design x_i = i/n for i = 1, ..., n. We then implemented the gradient descent update (3) with initialization θ^0 = 0 and constant step sizes α_t = 0.25. (...) For all our experiments, the noise variance σ² was set to one, but so as to have a data-dependent method, this knowledge was not provided to the estimator. (...) For a range of sample sizes n between 10 and 300, we performed the updates (3) with constant stepsize α = 0.25, stopping at the specified time T̂. (...) In each case, we applied the gradient update (3) with constant stepsizes α_t = 1 for all t. |
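The hold-out stopping rule and experiment setup quoted in the table can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's exact update (3): the test function, the first-order Sobolev kernel K(x, x') = min(x, x'), the even/odd train-test split, and the 500-iteration budget are all choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Setup quoted from the paper: n = 100, fixed design x_i = i/n, sigma^2 = 1.
n = 100
x = np.arange(1, n + 1) / n
f_star = np.abs(x - 0.5) - 0.5          # hypothetical regression function (assumption)
y = f_star + rng.normal(0.0, 1.0, n)    # y_i = f*(x_i) + w_i, w_i ~ N(0, 1)

# First-order Sobolev kernel K(x, x') = min(x, x') (an assumption; the paper
# covers general reproducing kernels).
K = np.minimum.outer(x, x)

# Hold-out method: gradient descent on 50% of the data, risk estimated on the
# other 50% (here an even/odd split for simplicity).
tr, te = np.arange(0, n, 2), np.arange(1, n, 2)
K_tr = K[np.ix_(tr, tr)]

alpha = 0.25                            # constant stepsize, as in the paper
theta = np.zeros(len(tr))
best_t, best_risk = 0, np.inf
for t in range(1, 501):
    # RKHS functional-gradient step on the least-squares objective
    # (a generic kernel gradient-descent sketch, not the paper's update (3)).
    resid = K_tr @ theta - y[tr]
    theta -= alpha * resid / len(tr)
    # Estimate the held-out risk R_HO(f_t) at this iteration.
    f_te = K[np.ix_(te, tr)] @ theta
    risk = np.mean((y[te] - f_te) ** 2)
    if risk < best_risk:
        best_t, best_risk = t, risk

print(f"hold-out stopping time: {best_t}, estimated held-out risk: {best_risk:.3f}")
```

Because the noise variance is one, the estimated held-out risk bottoms out near σ² = 1 plus the squared estimation error; the iteration achieving that minimum plays the role of the hold-out stopping time.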