Sub-sampled Newton Methods with Non-uniform Sampling

Authors: Peng Xu, Jiyan Yang, Fred Roosta, Christopher Ré, Michael W. Mahoney

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We empirically demonstrate that our methods are at least twice as fast as Newton's method on several real datasets." Also, from Section 4 (Numerical Experiments): "We consider an estimation problem in GLMs with Gaussian prior... We compare the performance of the following five algorithms... Next, we compare the performance of various methods as measured by relative-error of the solution vs. running time and the results are shown in Figure 2."
Researcher Affiliation | Academia | Peng Xu, Jiyan Yang, Farbod Roosta-Khorasani, Christopher Ré, Michael W. Mahoney (Stanford University; University of California at Berkeley)
Pseudocode | Yes | "Algorithm 1: Sub-sampled Newton method with Non-uniform Sampling" (a code sketch of this algorithm follows the table).
Open Source Code | No | The paper provides no links to source code and no explicit statement about code availability.
Open Datasets | Yes | "Table 3 summarizes the datasets used in ridge logistic regression." The datasets are CT slices [9], Forest [2], Adult [13], and Buzz [11].
Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits (percentages, counts, or predefined splits).
Hardware Specification | No | The paper does not describe the hardware used to run the experiments (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper mentions CG as a sub-problem solver and the L-BFGS method, but does not name software packages with version numbers needed for reproducibility.
Experiment Setup | Yes | "All algorithms are initialized with a zero vector. We also use CG to solve the sub-problem approximately to within 10^-6 relative residue error. ... ridge penalty parameter λ = 0.01." (A usage sketch of this setup follows the table.)
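The paper's Algorithm 1 is not reproduced on this page. As a reference point, here is a minimal NumPy/SciPy sketch of one plausible reading of a sub-sampled Newton iteration for ridge logistic regression, using row-norm-squared sampling probabilities (the paper also considers leverage-score schemes). The function name `subsampled_newton`, the sample size, and the iteration count are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.sparse.linalg import cg

def subsampled_newton(A, b, lam=0.01, n_samples=500, n_iters=20, tol=1e-6, rng=None):
    """Sub-sampled Newton sketch for ridge logistic regression.

    Full Hessian: H(x) = (1/n) A^T D(x) A + lam * I, with
    D(x)_ii = s_i (1 - s_i) and s_i = sigmoid(a_i^T x).
    Here H is estimated from a non-uniform row sample.
    """
    rng = np.random.default_rng(rng)
    n, d = A.shape
    x = np.zeros(d)  # zero initialization, matching the reported setup
    # Row-norm-squared sampling probabilities (one scheme the paper
    # considers; leverage scores are another).
    p = np.einsum("ij,ij->i", A, A)
    p = p / p.sum()
    for _ in range(n_iters):
        s = 1.0 / (1.0 + np.exp(-(A @ x)))  # sigmoid(A x)
        grad = A.T @ (s - b) / n + lam * x
        # Sample rows with probabilities p and reweight by 1/(m * p_i)
        # so the sub-sampled Hessian is an unbiased estimate.
        idx = rng.choice(n, size=n_samples, p=p)
        w = s[idx] * (1.0 - s[idx]) / (n_samples * n * p[idx])
        As = A[idx]
        H = As.T @ (As * w[:, None]) + lam * np.eye(d)
        # Inexact Newton step: solve H step = grad with CG.
        step, _ = cg(H, grad, rtol=tol)  # use tol=tol on SciPy < 1.12
        x = x - step
    return x
```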
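And a hedged usage example matching the reported configuration (zero start, CG solved to 10^-6 relative residual, ridge penalty λ = 0.01). The synthetic data is a stand-in for the real datasets (CT slices, Forest, Adult, Buzz), which this page does not ship.

```python
import numpy as np

# Stand-in data; replace with a loader for CT slices / Forest / Adult / Buzz.
rng = np.random.default_rng(0)
A = rng.standard_normal((5000, 50))         # design matrix
b = (rng.random(5000) < 0.5).astype(float)  # binary labels in {0, 1}

# Reported setup: zero initialization (inside subsampled_newton),
# CG tolerance 1e-6, lambda = 0.01.
x_hat = subsampled_newton(A, b, lam=0.01, n_samples=500, n_iters=20,
                          tol=1e-6, rng=rng)
print("||x_hat|| =", np.linalg.norm(x_hat))
```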