High-Probability Bound for Non-Smooth Non-Convex Stochastic Optimization with Heavy Tails
Authors: Langqi Liu, Yibo Wang, Lijun Zhang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct a preliminary experiment to evaluate our proposed algorithm, with details provided in Appendix C. We train the ResNet18 (He et al., 2016) model on the CIFAR10 (Krizhevsky & Hinton, 2009) dataset, which consists of a training set of 50k images and a testing set of 10k images from 10 classes. |
| Researcher Affiliation | Academia | 1National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China 2School of Artificial Intelligence, Nanjing University, Nanjing, China. Correspondence to: Lijun Zhang <zhanglj@lamda.nju.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 Candidate Generation Algorithm |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We train the ResNet18 (He et al., 2016) model on the CIFAR10 (Krizhevsky & Hinton, 2009) dataset, which consists of a training set of 50k images and a testing set of 10k images from 10 classes. |
| Dataset Splits | Yes | We train the ResNet18 (He et al., 2016) model on the CIFAR10 (Krizhevsky & Hinton, 2009) dataset, which consists of a training set of 50k images and a testing set of 10k images from 10 classes. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions several algorithms and models (e.g., ResNet18, SGD, SINGD, ONCCA, CGA) but does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We set the learning rate as 0.01 and momentum as 0.9. ... We use β = 0.9, p = 1, q = 10, and multiply the learning rate by an additional factor of 0.1. ... We use D = 2.5 × 10⁻² and η = 2.5 × 10⁻³. ... We use D = 2.5 × 10⁻², τ = 10 and η = 2.5 × 10⁻³. All four algorithms are equipped with a weight decay parameter of 5 × 10⁻⁴. |
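
For readers who want to approximate the baseline configuration quoted in the "Experiment Setup" row, the sketch below is a minimal PyTorch reconstruction of the SGD training loop with the stated hyperparameters (learning rate 0.01, momentum 0.9, weight decay 5 × 10⁻⁴) on ResNet18 / CIFAR10. The use of torchvision for the model and dataset, the batch size, and the number of epochs are assumptions, not details taken from the paper, and the proposed algorithm's own update rule is not reproduced here.

```python
# Hedged sketch of the baseline SGD setup; torchvision model/data loading,
# batch size, and epoch count are assumptions.
import torch
import torchvision
import torchvision.transforms as T

transform = T.Compose([T.ToTensor()])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(root="./data", train=False,
                                        download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128,  # batch size assumed
                                           shuffle=True)

# ResNet18 with a 10-class output head for CIFAR10.
model = torchvision.models.resnet18(num_classes=10)

# Hyperparameters quoted in the table: lr = 0.01, momentum = 0.9,
# weight decay = 5e-4 (shared by all four compared algorithms).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(1):  # epoch count is not specified in the quoted setup
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```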