Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Stochastic Bias-Reduced Gradient Methods
Authors: Hilal Asi, Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our paper demonstrates that our proposed optimum estimator is a useful proof device: it allows us to easily prove upper bounds on the complexity of structured optimization problems and, in at least one case (minimizing the maximum loss), to improve over previously known bounds. However, our work does not investigate the practicality of our optimum estimator, as implementation and experiments are outside its scope. |
| Researcher Affiliation | Academia | Stanford University; Tel Aviv University |
| Pseudocode | Yes | Algorithm 1: OptEst(...); Algorithm 2: MorGradEst(...); Algorithm 3: Stochastic accelerated gradient descent on the Moreau envelope; Algorithm 4: Stochastic accelerated proximal point method; Algorithm 5: Stochastic composite accelerated gradient descent; Algorithm 6: Differentially private stochastic convex optimization via optimum estimation |
| Open Source Code | No | Same response as Research Type: the optimum estimator is studied as a proof device, and implementation and experiments are outside the paper's scope; no code is provided. |
| Open Datasets | No | The paper focuses on theoretical contributions and does not conduct empirical studies; it therefore does not refer to publicly available datasets or provide access information for training data. |
| Dataset Splits | No | The paper focuses on theoretical contributions and does not conduct empirical studies; it therefore does not describe train/validation/test dataset splits. |
| Hardware Specification | No | Same response as Research Type: implementation and experiments are outside the paper's scope, so no hardware is specified. |
| Software Dependencies | No | The paper does not detail specific software dependencies with version numbers, as it is a theoretical work and does not include an experimental section requiring such specifications. |
| Experiment Setup | No | Same response as Research Type: implementation and experiments are outside the paper's scope, so no experimental setup is described. |
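
The Pseudocode row lists six algorithms; the first, OptEst, is the paper's bias-reduced optimum estimator, built on a multilevel Monte Carlo (MLMC) telescoping device. Below is a rough Python sketch of that general device under our own assumptions, not a reproduction of the authors' Algorithm 1: the `solve(j, seed)` interface, the seed-based coupling, and the toy quadratic are all hypothetical choices of ours.

```python
import numpy as np

def bias_reduced_optimum_estimate(solve, j_max=8, rng=None):
    """Minimal sketch of an MLMC-style bias-reduced optimum estimate.

    Assumes a hypothetical `solve(j, seed)` that returns an approximate
    minimizer using roughly 2**j units of work, with runs sharing a seed
    coupled so that solve(j, s) - solve(j - 1, s) shrinks as j grows.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Truncated geometric distribution over refinement levels: p_j ∝ 2^{-j}.
    levels = np.arange(1, j_max + 1)
    p = 2.0 ** (-levels)
    p /= p.sum()
    j = int(rng.choice(levels, p=p))
    # Importance-weighted telescoping correction: in expectation the sum over
    # levels telescopes, so the estimate carries the (small) bias of the most
    # accurate level j_max while the expected work stays near one cheap solve.
    base = solve(0, int(rng.integers(2**31)))
    seed = int(rng.integers(2**31))
    return base + (solve(j, seed) - solve(j - 1, seed)) / p[j - 1]

# Toy check: minimize E[(x - z)^2 / 2] with z ~ N(1, 1); the minimizer is 1.
# Averaging 2**j seeded draws makes consecutive levels prefix-coupled for free.
def toy_solve(j, seed):
    return np.random.default_rng(seed).normal(1.0, 1.0, size=2 ** j).mean()

estimates = [bias_reduced_optimum_estimate(toy_solve) for _ in range(10_000)]
print(np.mean(estimates))  # close to the true minimizer, 1.0
```

Drawing the level from a geometric distribution is what keeps the expected cost close to a single low-accuracy solve, while the telescoping correction pushes the bias down to that of level j_max; again, this illustrates the general MLMC idea only, under the stated assumptions.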