Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Explicit Convergence Rates of Greedy and Random Quasi-Newton Methods
Authors: Dachao Lin, Haishan Ye, Zhihua Zhang
JMLR 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 6. Numerical Experiments: In this section, we verify our theorems through numerical results for quasi-Newton methods. Rodomanov and Nesterov (2021b, Section 5) have already compared their proposed greedy quasi-Newton methods with the classical quasi-Newton methods. They showed that GrDFP, GrBFGS, GrSR1 (greedy DFP, BFGS, SR1 methods), with directions based on û_A(G) (defined in Eq. (9)), have quite competitive convergence with the standard versions. |
| Researcher Affiliation | Academia | Dachao Lin (EMAIL), Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Haishan Ye (EMAIL), School of Management, Xi'an Jiaotong University, Xi'an, China; Zhihua Zhang (EMAIL), School of Mathematical Sciences, Peking University, Beijing, China |
| Pseudocode | Yes | Algorithm 1 (Random quasi-Newton updates). Initialization: choose G_0 ⪰ A. For k ≥ 0: choose τ_k ∈ [0, 1] and draw u_k from a distribution D satisfying Eq. (12); compute G_{k+1} = Broyd_{τ_k}(G_k, A, u_k). |
| Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the methodology described. |
| Open Datasets | Yes | Additionally, we take data from the LIBSVM collection of real-world datasets for binary classification problems (Chang and Lin, 2011). |
| Dataset Splits | No | The paper uses datasets from the LIBSVM collection but does not explicitly provide details about how these datasets were split into training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU or CPU models, or memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers for libraries or frameworks used in the experiments. |
| Experiment Setup | Yes | The starting point x_0 for all methods is the same and is generated uniformly at random from the Euclidean sphere of radius 1/d centered at the minimizer, i.e., x_0 ~ Unif((1/d) S^{d-1}). We list the name of the dataset, the dimension d, and the condition number κ under the corresponding γ in the title of each figure. |
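The pseudocode extracted above (Algorithm 1) can be sketched concretely. The following is a minimal illustration, not the authors' code: it assumes the Broyden-family update Broyd_τ is the convex combination of the DFP and BFGS updates (as in Rodomanov and Nesterov's framework) and implements only the BFGS endpoint (τ_k = 0), with the direction distribution D taken to be the uniform distribution on the unit sphere, one admissible choice. The target matrix A, the initialization G_0 = L·I, and the progress measure σ(G) = tr(A⁻¹(G − A)) follow that framework; all variable names are illustrative.

```python
import numpy as np

def bfgs_update(G, A, u):
    """BFGS member of the Broyden family (tau = 0): updates the Hessian
    approximation G of the target SPD matrix A along direction u."""
    Gu, Au = G @ u, A @ u
    return (G
            - np.outer(Gu, Gu) / (u @ Gu)
            + np.outer(Au, Au) / (u @ Au))

def sigma(G, A):
    # Trace measure sigma(G) = tr(A^{-1}(G - A)); zero iff G = A when G >= A.
    return np.trace(np.linalg.solve(A, G - A))

rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)                    # random SPD "Hessian"
G = np.linalg.eigvalsh(A)[-1] * np.eye(d)  # G0 = L*I satisfies G0 >= A

s0 = sigma(G, A)
for _ in range(300):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                 # u_k ~ Unif(S^{d-1})
    G = bfgs_update(G, A, u)

print(sigma(G, A) / s0)  # ratio shrinks toward 0 as G approaches A
```

Under G_0 ⪰ A, each update preserves G_k ⪰ A and σ never increases, which is what the loop above exhibits; the paper's results quantify how fast σ_k decays in expectation for such random directions.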