Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Adaptive Proximal Gradient Method for Convex Optimization
Authors: Yura Malitsky, Konstantin Mishchenko
NeurIPS 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 6 (see also Appendix D), we conduct experiments to evaluate the proposed method against different linesearch variants. |
| Researcher Affiliation | Collaboration | Yura Malitsky, Faculty of Mathematics, University of Vienna, Austria, EMAIL; Konstantin Mishchenko, Samsung AI Center, UK, EMAIL |
| Pseudocode | Yes | Algorithm 1 Adaptive gradient descent; Algorithm 2 Adaptive gradient descent-2; Algorithm 3 Adaptive proximal gradient method |
| Open Source Code | Yes | https://github.com/ymalitsky/AdProxGD |
| Open Datasets | No | The paper generates synthetic data for its experiments (e.g., 'We generated a random y ∈ R^n', 'We created matrix A by multiplying matrices U and V') rather than using existing public datasets with specific access information. |
| Dataset Splits | No | The paper describes how data was generated for various problems but does not specify training, validation, or test dataset splits in terms of percentages or sample counts for reproduction, nor does it reference standard predefined splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names, framework versions) needed to replicate the experiments. |
| Experiment Setup | Yes | An efficient implementation of Armijo's linesearch requires two parameters, s > 1 and r < 1. In the k-th iteration, the first iteration of linesearch starts from α_k = s·α_{k-1}... The choice of (s, r) matters a lot. For the maximum likelihood estimation problem: n = 100, l = 0.1, u = 10, M = 50. For low-rank matrix completion: n = 100, r = 20. For the minimal length piecewise-linear curve: m = 50, n = 200. |
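The Pseudocode row lists Algorithm 1 ("Adaptive gradient descent") by name only. As an illustration of the general idea behind such linesearch-free adaptive methods, the sketch below uses the step-size rule from Malitsky and Mishchenko's adaptive gradient descent line of work; treat the initialization, guards, and function names as assumptions rather than a verbatim reproduction of Algorithm 1.

```python
import numpy as np

def adaptive_gradient_descent(grad, x0, lam0=1e-6, n_iter=100):
    """Linesearch-free adaptive gradient descent (sketch, not the paper's exact code).

    Assumed step-size rule from the AdGD family:
      lam_k = min( sqrt(1 + theta_{k-1}) * lam_{k-1},
                   ||x_k - x_{k-1}|| / (2 ||grad(x_k) - grad(x_{k-1})||) )
    where theta_k = lam_k / lam_{k-1} tracks the step-size growth.
    """
    x_prev = x0
    g_prev = grad(x_prev)
    x = x_prev - lam0 * g_prev          # one plain gradient step to initialize
    lam_prev, theta = lam0, np.inf      # theta = inf lets the local estimate rule first
    for _ in range(n_iter):
        g = grad(x)
        diff_x = np.linalg.norm(x - x_prev)
        diff_g = np.linalg.norm(g - g_prev)
        # local inverse-smoothness estimate; guard against a zero gradient difference
        local = diff_x / (2.0 * diff_g) if diff_g > 0 else np.inf
        lam = min(np.sqrt(1.0 + theta) * lam_prev, local)
        if not np.isfinite(lam):
            lam = lam_prev
        x_prev, g_prev = x, g
        x = x - lam * g                 # gradient step with the adaptive step size
        theta, lam_prev = lam / lam_prev, lam
    return x
```

On a simple smooth convex problem such as f(x) = ½‖x‖², the local estimate immediately caps the step size at 1/(2L), so no step-size tuning or backtracking loop is needed.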
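The Armijo linesearch described in the Experiment Setup row (warm-started with α_k = s·α_{k-1}, shrinking by r < 1 on failure) can be sketched as follows; the function name, the sufficient-decrease constant `c`, and the gradient-step form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def armijo_linesearch(f, grad_fx, x, alpha_prev, s=2.0, r=0.5, c=1e-4, max_backtracks=50):
    """Backtracking (Armijo) linesearch for a gradient step (hypothetical sketch).

    Per the report's description: the first trial step is alpha = s * alpha_prev
    with s > 1, and each failed trial shrinks alpha by the factor r < 1.
    """
    fx = f(x)
    alpha = s * alpha_prev              # warm start: alpha_k = s * alpha_{k-1}
    g2 = float(np.dot(grad_fx, grad_fx))
    for _ in range(max_backtracks):
        # Armijo sufficient-decrease condition for the step x - alpha * grad_fx
        if f(x - alpha * grad_fx) <= fx - c * alpha * g2:
            return alpha
        alpha *= r                      # shrink and retry
    return alpha
```

The choice of (s, r) trades off extra function evaluations against overly conservative steps, which is consistent with the report's note that this pair "matters a lot".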