Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization
Authors: Karan Chadha, Gary Cheng, John Duchi
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We corroborate our theoretical results with empirical testing to demonstrate the gains accurate modeling, acceleration, and minibatching provide. |
| Researcher Affiliation | Academia | (1) Electrical Engineering Department, Stanford University, Stanford, CA; (2) Statistics Department, Stanford University, Stanford, CA. |
| Pseudocode | No | The paper describes mathematical update rules and iterations (e.g., equations 2, 6, 9) but does not present them in a structured pseudocode or algorithm block. |
| Open Source Code | No | The paper states, "We use and extend the code provided by (Asi et al., 2020)." (Section 6), but does not provide specific access to the authors' own source code for the methodology described. |
| Open Datasets | No | The paper describes generating synthetic data for its experiments (e.g., "We generate rows of A and x i.i.d. N(0, I_n)" in Sections 6.1, 6.2, and 6.3) but does not provide access information (link, DOI, or citation) for a publicly available or open dataset. A sketch of this generation appears below the table. |
| Dataset Splits | No | The paper does not explicitly provide specific details about training, validation, or test dataset splits for its experiments. It describes data generation and general experimental parameters. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were mentioned in the paper. |
| Software Dependencies | No | The paper mentions using and extending code from (Asi et al., 2020), but it does not specify any software names with version numbers (e.g., Python, PyTorch, CUDA versions) that would be needed for replication. |
| Experiment Setup | Yes | "We use minibatch sizes m ∈ {1, 4, 8, 16, 32, 64} and initial step sizes α0 ∈ {10^(i/2), i ∈ {-4, -3, ..., 5}}. For all experiments we run 30 trials with different seeds and plot the 95% confidence sets." A sketch of this sweep appears below the table. |
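
The "Open Datasets" row quotes the paper's synthetic data generation. Below is a minimal sketch of that generation, assuming a noiseless linear-regression setup; the problem dimensions `n_rows` and `n` and the target construction are illustrative assumptions, since the excerpt specifies only the i.i.d. N(0, I_n) draws.

```python
import numpy as np

# Sketch of the quoted generation: rows of A and the ground-truth x are
# drawn i.i.d. N(0, I_n). The sizes below and the noiseless targets are
# illustrative assumptions, not values stated in the excerpt.
rng = np.random.default_rng(seed=0)
n_rows, n = 1000, 50                    # assumed problem dimensions
A = rng.standard_normal((n_rows, n))    # rows of A ~ N(0, I_n)
x_true = rng.standard_normal(n)         # x ~ N(0, I_n)
b = A @ x_true                          # assumed noiseless targets b = Ax
```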
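The "Experiment Setup" row fully specifies the sweep grid and trial protocol. Below is a minimal sketch of that protocol, assuming a normal-approximation 95% confidence interval (the paper's exact construction of the confidence sets is not stated) and using a hypothetical `run_trial` stand-in for one optimizer run, which is not from the paper.

```python
import numpy as np

def run_trial(m: int, alpha0: float, seed: int) -> float:
    """Hypothetical stand-in for one optimizer run; returns a final loss.

    A real replication would instead call the model-based optimizer from
    the extended (Asi et al., 2020) code here.
    """
    rng = np.random.default_rng(seed)
    return rng.random() / (m * alpha0)  # placeholder so the sketch runs

minibatch_sizes = [1, 4, 8, 16, 32, 64]             # m
step_sizes = [10 ** (i / 2) for i in range(-4, 6)]  # alpha_0 = 10^(i/2)
n_trials = 30                                       # seeds per setting

for m in minibatch_sizes:
    for alpha0 in step_sizes:
        losses = np.array(
            [run_trial(m, alpha0, seed) for seed in range(n_trials)]
        )
        # 95% confidence half-width via the normal approximation (one
        # common choice; the paper may construct its sets differently)
        half_width = 1.96 * losses.std(ddof=1) / np.sqrt(n_trials)
        print(f"m={m:2d}  alpha0={alpha0:9.4f}  "
              f"loss={losses.mean():.4f} ± {half_width:.4f}")
```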