Stochastic Weakly Convex Optimization beyond Lipschitz Continuity
Authors: Wenzhi Gao, Qi Deng
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments demonstrate the efficiency and robustness of our proposed stepsize policies. |
| Researcher Affiliation | Academia | Institute for Computational and Mathematical Engineering, Stanford University; Antai College of Economics and Management, Shanghai Jiao Tong University. |
| Pseudocode | Yes | Algorithm 1 (Stochastic model-based optimization): Input x_1; for k = 1, 2, … do: sample data ξ_k, choose regularization γ_k > 0, and set x_{k+1} = argmin_x { f_{x_k}(x, ξ_k) + ω(x) + (γ_k/2)‖x − x_k‖² }. |
| Open Source Code | No | The paper does not provide any statement regarding the release of source code for the methodology described, nor does it include links to a code repository. |
| Open Datasets | No | The paper describes a data generation process consistent with a cited work (Deng and Gao, 2021) to create synthetic datasets for experiments, but it does not indicate that the generated dataset or the generation script is publicly available, nor does it use a pre-existing public dataset. |
| Dataset Splits | No | The paper describes parameters for generating synthetic data, such as m, n, κ, and pfail, but it does not specify any training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used to run the experiments, such as CPU, GPU models, or memory specifications. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks like PyTorch or TensorFlow) used for the experiments. |
| Experiment Setup | Yes | Stopping criterion: algorithms run for 400 epochs (K = 400m) and stop if f(x_k) ≤ 1.2 f(x̂). Stepsize: γ_k = θ√K for vanilla algorithms; γ_k = θ G(‖x_k‖)√K for the robust stepsize with a known growth condition; γ_k = θ max{Lip(x_k, ξ), α}√K for the robust stepsize with an unknown growth condition. θ ∈ [10⁻², 10¹] serves as a hyper-parameter. Clipping: the clipping parameter α is set to 1.0. |
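To make the table's pseudocode and stepsize entries concrete, here is a minimal Python sketch of Algorithm 1 specialized to the stochastic subgradient model (a linear model f_{x_k}, ω ≡ 0), for which the proximal step has a closed form, combined with the clipped stepsize γ_k = θ·max{Lip(x_k, ξ_k), α}·√K from the robust policy with an unknown growth condition. The robust-regression loss, the helper names, and the use of the sampled subgradient norm as Lip(x_k, ξ_k) are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def model_based_step(x, g, gamma):
    """Closed-form prox step for the linear model f(x_k) + <g, x - x_k>:
    argmin_x <g, x - x_k> + (gamma/2)||x - x_k||^2  =>  x_k - g/gamma."""
    return x - g / gamma

def run(A, b, K, theta=1.0, alpha=1.0, seed=None):
    """Stochastic model-based method on the robust regression loss
    f(x) = (1/m) * sum_i |a_i^T x - b_i|, using the clipped stepsize
    gamma_k = theta * max{Lip(x_k, xi_k), alpha} * sqrt(K),
    where Lip(x_k, xi_k) = ||g_k|| is the sampled subgradient norm
    (an illustrative local Lipschitz estimate)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(K):
        i = rng.integers(m)                      # sample data xi_k
        r = A[i] @ x - b[i]
        g = np.sign(r) * A[i]                    # subgradient of |a_i^T x - b_i|
        lip = np.linalg.norm(g)                  # Lip(x_k, xi_k) estimate
        gamma = theta * max(lip, alpha) * np.sqrt(K)
        x = model_based_step(x, g, gamma)
    return x

def loss(A, b, x):
    """Average absolute residual (the robust regression objective)."""
    return np.mean(np.abs(A @ x - b))
```

Because γ_k scales with √K and is clipped below by α, the per-iteration displacement ‖g‖/γ_k is at most 1/√K regardless of how large the sampled subgradient is, which is the point of the robust policy when no global Lipschitz constant is available.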