Estimating the Error of Randomized Newton Methods: A Bootstrap Approach

Authors: Jessie X.T. Chen, Miles Lopes

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present a collection of experiments that study how well Algorithms 1 and 2 can estimate the errors of NEWTON SKETCH and GIANT in the context of ℓ2-regularized logistic regression.
Researcher Affiliation | Academia | 1 Department of Mathematics, University of California, Davis; 2 Department of Statistics, University of California, Davis.
Pseudocode | Yes | Algorithm 1: Error estimation for NEWTON SKETCH
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets | Yes | We used the SUSY regression dataset of size (n = 5,000,000, d = 18), which can be obtained from LIBSVM (Chang & Lin, 2011).
Dataset Splits | No | The paper uses the SUSY regression dataset but does not explicitly provide train/validation/test splits or mention cross-validation details.
Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running the experiments.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers required to replicate the experiment.
Experiment Setup | Yes | For all the experiments, the regularization parameter was chosen as γ = 10⁻³, and the number of bootstrap samples was chosen as B = 12. The step size η_k at each iteration of NEWTON SKETCH and GIANT was determined by the Armijo line search, so that f(w_k + η_k ê_k) ≤ f(w_k) + η_k β ⟨ê_k, g_k⟩. Specifically, the control parameter β was set to β = 0.1, and the search for the step size was restricted to a grid of values η_k ∈ {2⁰, 2⁻¹, . . . , 2⁻¹⁰}. A sketch size of t = n/32 was used, and we chose the number of machines to be m = 32 for all datasets.
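
A few hedged sketches follow to illustrate the items in the table; none of this code is taken from the paper. First, the Open Datasets row: SUSY is distributed in LIBSVM's sparse text format, so one way to load it in Python (assuming scikit-learn is available; the file path below is a placeholder for wherever the LIBSVM download is saved) is:

```python
import numpy as np
from sklearn.datasets import load_svmlight_file

# The path "SUSY" is illustrative; download the file from the LIBSVM
# binary-classification page first (n = 5,000,000 rows, d = 18 features).
X_sparse, y = load_svmlight_file("SUSY", n_features=18)
X = X_sparse.toarray()            # dense is manageable at d = 18
y = np.where(y > 0, 1.0, -1.0)    # map labels to {-1, +1} for the logistic loss below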
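
Next, the Armijo line search quoted in the Experiment Setup row: accept the largest η_k with f(w_k + η_k ê_k) ≤ f(w_k) + η_k β ⟨ê_k, g_k⟩, where β = 0.1 and η_k ∈ {2⁰, 2⁻¹, …, 2⁻¹⁰}. The sketch below assumes labels in {−1, +1}, a penalty of (γ/2)‖w‖², and illustrative function names; the paper does not spell out these implementation details.

```python
import numpy as np
from scipy.special import expit

def f(w, X, y, gamma):
    """l2-regularized logistic loss with labels y in {-1, +1};
    the exact scaling of the penalty term is an assumption."""
    z = X @ w
    return np.mean(np.logaddexp(0.0, -y * z)) + 0.5 * gamma * np.dot(w, w)

def gradient(w, X, y, gamma):
    z = X @ w
    s = -y * expit(-y * z)                  # derivative of log(1 + exp(-y z)) w.r.t. z
    return X.T @ s / X.shape[0] + gamma * w

def armijo_grid_step(w, e_hat, X, y, gamma, beta=0.1):
    """Return the largest eta in {2^0, 2^-1, ..., 2^-10} satisfying
    f(w + eta*e) <= f(w) + eta * beta * <e, g>, as quoted in the table."""
    g = gradient(w, X, y, gamma)
    fw = f(w, X, y, gamma)
    for eta in (2.0 ** -j for j in range(11)):   # 2^0 down to 2^-10
        if f(w + eta * e_hat, X, y, gamma) <= fw + eta * beta * np.dot(e_hat, g):
            return eta
    return 2.0 ** -10                            # fall back to the smallest grid value
```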
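
Finally, the Pseudocode row refers to Algorithm 1, error estimation for NEWTON SKETCH. The bootstrap idea is to resample the rows of the sketched matrix with replacement, recompute the approximate Newton step from each resample, and report a quantile of the resulting fluctuations as a data-driven error estimate. The sketch below is a simplified illustration under assumed details (Hessian scaling, sign convention, an ℓ2 error metric, and a plain (1 − α) quantile rule); it is not the paper's exact Algorithm 1.

```python
import numpy as np

def bootstrap_error_estimate(A_tilde, g, gamma, e_hat, B=12, alpha=0.05, rng=None):
    """Quantile-based bootstrap estimate of the error of a sketched Newton step.

    A_tilde : (t, d) sketched square-root of the (unregularized) Hessian
    g       : (d,)   gradient at the current iterate
    e_hat   : (d,)   the approximate Newton step that was actually computed
    The Hessian scaling, the minus sign on the step, and the l2 error
    metric are assumptions made for this illustration.
    """
    rng = np.random.default_rng() if rng is None else rng
    t, d = A_tilde.shape
    errors = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, t, size=t)             # resample sketch rows with replacement
        A_star = A_tilde[idx]
        H_star = A_star.T @ A_star / t + gamma * np.eye(d)
        e_star = -np.linalg.solve(H_star, g)         # recompute the step from the resampled sketch
        errors[b] = np.linalg.norm(e_star - e_hat)   # fluctuation of the bootstrapped step
    return np.quantile(errors, 1.0 - alpha)          # (1 - alpha) quantile as the error estimate
```

Each bootstrap replicate only touches the t × d sketched matrix, so with B = 12 (as in the Experiment Setup row) the added cost per iteration stays small relative to the full n × d problem.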