Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Stochastic Block BFGS: Squeezing More Curvature out of Data

Authors: Robert Gower, Donald Goldfarb, Peter Richtarik

ICML 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical tests on large-scale logistic regression problems reveal that our method is more robust and substantially outperforms current state-of-the-art methods.
Researcher Affiliation | Academia | Robert M. Gower EMAIL Donald Goldfarb EMAIL Peter Richtarik EMAIL
Pseudocode | Yes | Algorithm 1: Stochastic Block BFGS Method; Algorithm 2: Block L-BFGS Update (Two-loop Recursion); Algorithm 3: Block L-BFGS Update (Factored Loop Recursion)
Open Source Code | Yes | All the code for the experiments can be downloaded from www.maths.ed.ac.uk/~prichtar/i_software.html.
Open Datasets | Yes | We tested seven empirical risk minimization problems with a logistic loss and L2 regularizer using data from LIBSVM (Chang & Lin, 2011).
Dataset Splits | No | The paper does not explicitly provide specific training/validation/test dataset splits (e.g., percentages, sample counts, or explicit files).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running experiments.
Software Dependencies | No | All the methods were implemented in MATLAB. (Does not specify version numbers for MATLAB or other dependencies.)
Experiment Setup | Yes | We set the regularization parameter λ = 1/n for all experiments. We set the subsampling size |St| = √n throughout our tests. We tested each method with a stepsize η ∈ {10^0, 5·10^-1, 10^-1, ..., 10^-6, 5·10^-7, 10^-7} for the best outcome, and used the resulting η. Finally, we used m = n/|St| for the number of inner iterations... We set the memory to 10 for the MNJ method in all tests...
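The stepsize sweep quoted in the Experiment Setup row can be illustrated with a minimal sketch: run a stochastic first-order method on a toy L2-regularized logistic regression for each stepsize in the grid and keep the one with the lowest final loss. This is an assumption-laden illustration (the toy data, the plain-SGD inner loop, and the helper names `loss`, `grad`, and `sgd` are all hypothetical), not the paper's MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))      # toy design matrix (illustrative only)
y = rng.choice([-1.0, 1.0], size=n)  # toy +/-1 labels
lam = 1.0 / n                        # regularization parameter lambda = 1/n, as in the paper

def loss(w):
    # L2-regularized logistic loss over the full dataset
    return np.mean(np.log1p(np.exp(-y * (X @ w)))) + 0.5 * lam * (w @ w)

def grad(w, idx):
    # mini-batch gradient of the regularized logistic loss
    s = -y[idx] / (1.0 + np.exp(y[idx] * (X[idx] @ w)))
    return X[idx].T @ s / len(idx) + lam * w

def sgd(eta, iters=500, batch=14):
    # plain SGD stand-in for the paper's methods; batch ~ sqrt(n)
    w = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        w = w - eta * grad(w, idx)
    return w

# The stepsize grid from the Experiment Setup row
grid = [1.0, 5e-1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 5e-7, 1e-7]
best_eta = min(grid, key=lambda eta: loss(sgd(eta)))
```

Each method in the paper is tuned this way independently, so the comparison reports every solver at its best grid stepsize rather than at a shared default.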